00:00:00.002 Started by upstream project "autotest-nightly" build number 3882 00:00:00.002 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3262 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.139 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.140 The recommended git tool is: git 00:00:00.140 using credential 00000000-0000-0000-0000-000000000002 00:00:00.146 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.200 Fetching changes from the remote Git repository 00:00:00.203 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.257 Using shallow fetch with depth 1 00:00:00.257 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.257 > git --version # timeout=10 00:00:00.296 > git --version # 'git version 2.39.2' 00:00:00.296 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.319 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.319 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.733 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.758 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.800 Checking out Revision 308e970df89ed396a3f9dcf22fba8891259694e4 (FETCH_HEAD) 00:00:06.800 > git config core.sparsecheckout # timeout=10 00:00:06.813 > git read-tree -mu HEAD # timeout=10 00:00:06.828 > git checkout -f 308e970df89ed396a3f9dcf22fba8891259694e4 # timeout=5 00:00:06.845 Commit message: "jjb/create-perf-report: make job run concurrent" 00:00:06.845 > git rev-list --no-walk 308e970df89ed396a3f9dcf22fba8891259694e4 # timeout=10 00:00:06.929 [Pipeline] Start of Pipeline 00:00:06.942 [Pipeline] library 00:00:06.943 Loading library shm_lib@master 00:00:06.943 Library shm_lib@master is cached. Copying from home. 00:00:06.957 [Pipeline] node 00:00:06.966 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest_3 00:00:06.968 [Pipeline] { 00:00:06.975 [Pipeline] catchError 00:00:06.977 [Pipeline] { 00:00:06.989 [Pipeline] wrap 00:00:07.000 [Pipeline] { 00:00:07.005 [Pipeline] stage 00:00:07.006 [Pipeline] { (Prologue) 00:00:07.019 [Pipeline] echo 00:00:07.021 Node: VM-host-SM9 00:00:07.025 [Pipeline] cleanWs 00:00:07.032 [WS-CLEANUP] Deleting project workspace... 00:00:07.032 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.039 [WS-CLEANUP] done 00:00:07.196 [Pipeline] setCustomBuildProperty 00:00:07.289 [Pipeline] httpRequest 00:00:07.311 [Pipeline] echo 00:00:07.312 Sorcerer 10.211.164.101 is alive 00:00:07.322 [Pipeline] httpRequest 00:00:07.325 HttpMethod: GET 00:00:07.326 URL: http://10.211.164.101/packages/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz 00:00:07.327 Sending request to url: http://10.211.164.101/packages/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz 00:00:07.327 Response Code: HTTP/1.1 200 OK 00:00:07.328 Success: Status code 200 is in the accepted range: 200,404 00:00:07.328 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz 00:00:08.409 [Pipeline] sh 00:00:08.688 + tar --no-same-owner -xf jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz 00:00:08.703 [Pipeline] httpRequest 00:00:08.733 [Pipeline] echo 00:00:08.734 Sorcerer 10.211.164.101 is alive 00:00:08.744 [Pipeline] httpRequest 00:00:08.749 HttpMethod: GET 00:00:08.750 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:08.750 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:08.766 Response Code: HTTP/1.1 200 OK 00:00:08.767 Success: Status code 200 is in the accepted range: 200,404 00:00:08.767 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:21.224 [Pipeline] sh 00:01:21.503 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:24.876 [Pipeline] sh 00:01:25.154 + git -C spdk log --oneline -n5 00:01:25.154 719d03c6a sock/uring: only register net impl if supported 00:01:25.154 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:01:25.154 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:25.154 6c7c1f57e accel: add sequence outstanding stat 00:01:25.154 3bc8e6a26 accel: add utility to put task 00:01:25.179 [Pipeline] writeFile 00:01:25.196 [Pipeline] sh 00:01:25.476 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:25.489 [Pipeline] sh 00:01:25.769 + cat autorun-spdk.conf 00:01:25.769 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.769 SPDK_TEST_NVME=1 00:01:25.769 SPDK_TEST_FTL=1 00:01:25.769 SPDK_TEST_ISAL=1 00:01:25.769 SPDK_RUN_ASAN=1 00:01:25.769 SPDK_RUN_UBSAN=1 00:01:25.769 SPDK_TEST_XNVME=1 00:01:25.769 SPDK_TEST_NVME_FDP=1 00:01:25.769 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:25.775 RUN_NIGHTLY=1 00:01:25.777 [Pipeline] } 00:01:25.798 [Pipeline] // stage 00:01:25.818 [Pipeline] stage 00:01:25.820 [Pipeline] { (Run VM) 00:01:25.838 [Pipeline] sh 00:01:26.118 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:26.118 + echo 'Start stage prepare_nvme.sh' 00:01:26.118 Start stage prepare_nvme.sh 00:01:26.118 + [[ -n 0 ]] 00:01:26.118 + disk_prefix=ex0 00:01:26.118 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_3 ]] 00:01:26.118 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf ]] 00:01:26.118 + source /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf 00:01:26.118 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.118 ++ SPDK_TEST_NVME=1 00:01:26.118 ++ SPDK_TEST_FTL=1 00:01:26.118 ++ SPDK_TEST_ISAL=1 00:01:26.118 ++ SPDK_RUN_ASAN=1 00:01:26.118 ++ SPDK_RUN_UBSAN=1 00:01:26.118 ++ SPDK_TEST_XNVME=1 00:01:26.118 ++ SPDK_TEST_NVME_FDP=1 00:01:26.118 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:26.118 ++ RUN_NIGHTLY=1 00:01:26.118 + cd 
/var/jenkins/workspace/nvme-vg-autotest_3 00:01:26.118 + nvme_files=() 00:01:26.118 + declare -A nvme_files 00:01:26.118 + backend_dir=/var/lib/libvirt/images/backends 00:01:26.118 + nvme_files['nvme.img']=5G 00:01:26.118 + nvme_files['nvme-cmb.img']=5G 00:01:26.118 + nvme_files['nvme-multi0.img']=4G 00:01:26.118 + nvme_files['nvme-multi1.img']=4G 00:01:26.118 + nvme_files['nvme-multi2.img']=4G 00:01:26.118 + nvme_files['nvme-openstack.img']=8G 00:01:26.118 + nvme_files['nvme-zns.img']=5G 00:01:26.118 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:26.118 + (( SPDK_TEST_FTL == 1 )) 00:01:26.118 + nvme_files["nvme-ftl.img"]=6G 00:01:26.118 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:26.118 + nvme_files["nvme-fdp.img"]=1G 00:01:26.118 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:26.118 + for nvme in "${!nvme_files[@]}" 00:01:26.118 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:01:26.118 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:26.118 + for nvme in "${!nvme_files[@]}" 00:01:26.118 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-ftl.img -s 6G 00:01:26.118 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:26.118 + for nvme in "${!nvme_files[@]}" 00:01:26.118 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:01:26.377 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:26.377 + for nvme in "${!nvme_files[@]}" 00:01:26.377 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:01:26.377 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:26.377 + for nvme in "${!nvme_files[@]}" 00:01:26.377 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:01:26.377 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:26.377 + for nvme in "${!nvme_files[@]}" 00:01:26.377 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:01:26.377 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:26.377 + for nvme in "${!nvme_files[@]}" 00:01:26.377 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:01:26.635 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:26.635 + for nvme in "${!nvme_files[@]}" 00:01:26.635 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-fdp.img -s 1G 00:01:26.635 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:26.635 + for nvme in "${!nvme_files[@]}" 00:01:26.635 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:01:26.892 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:26.892 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:01:26.892 + echo 'End stage prepare_nvme.sh' 00:01:26.892 End stage 
prepare_nvme.sh 00:01:26.900 [Pipeline] sh 00:01:27.172 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:27.172 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex0-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:01:27.172 00:01:27.172 DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant 00:01:27.172 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk 00:01:27.172 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_3 00:01:27.172 HELP=0 00:01:27.172 DRY_RUN=0 00:01:27.172 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,/var/lib/libvirt/images/backends/ex0-nvme-fdp.img, 00:01:27.172 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:27.172 NVME_AUTO_CREATE=0 00:01:27.172 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,, 00:01:27.172 NVME_CMB=,,,, 00:01:27.172 NVME_PMR=,,,, 00:01:27.172 NVME_ZNS=,,,, 00:01:27.172 NVME_MS=true,,,, 00:01:27.172 NVME_FDP=,,,on, 00:01:27.172 SPDK_VAGRANT_DISTRO=fedora38 00:01:27.172 SPDK_VAGRANT_VMCPU=10 00:01:27.172 SPDK_VAGRANT_VMRAM=12288 00:01:27.172 SPDK_VAGRANT_PROVIDER=libvirt 00:01:27.172 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:27.172 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:27.172 SPDK_OPENSTACK_NETWORK=0 00:01:27.172 VAGRANT_PACKAGE_BOX=0 00:01:27.172 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile 00:01:27.172 FORCE_DISTRO=true 00:01:27.172 VAGRANT_BOX_VERSION= 00:01:27.172 EXTRA_VAGRANTFILES= 00:01:27.172 NIC_MODEL=e1000 00:01:27.172 00:01:27.172 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_3/fedora38-libvirt' 00:01:27.172 /var/jenkins/workspace/nvme-vg-autotest_3/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest_3 00:01:30.454 Bringing machine 'default' up with 'libvirt' provider... 00:01:31.388 ==> default: Creating image (snapshot of base box volume). 00:01:31.388 ==> default: Creating domain with the following settings... 
00:01:31.388 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720775237_0043567a1be33cfbb6e7 00:01:31.388 ==> default: -- Domain type: kvm 00:01:31.388 ==> default: -- Cpus: 10 00:01:31.388 ==> default: -- Feature: acpi 00:01:31.388 ==> default: -- Feature: apic 00:01:31.388 ==> default: -- Feature: pae 00:01:31.388 ==> default: -- Memory: 12288M 00:01:31.388 ==> default: -- Memory Backing: hugepages: 00:01:31.388 ==> default: -- Management MAC: 00:01:31.388 ==> default: -- Loader: 00:01:31.388 ==> default: -- Nvram: 00:01:31.388 ==> default: -- Base box: spdk/fedora38 00:01:31.388 ==> default: -- Storage pool: default 00:01:31.388 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720775237_0043567a1be33cfbb6e7.img (20G) 00:01:31.388 ==> default: -- Volume Cache: default 00:01:31.388 ==> default: -- Kernel: 00:01:31.388 ==> default: -- Initrd: 00:01:31.388 ==> default: -- Graphics Type: vnc 00:01:31.388 ==> default: -- Graphics Port: -1 00:01:31.388 ==> default: -- Graphics IP: 127.0.0.1 00:01:31.388 ==> default: -- Graphics Password: Not defined 00:01:31.388 ==> default: -- Video Type: cirrus 00:01:31.388 ==> default: -- Video VRAM: 9216 00:01:31.388 ==> default: -- Sound Type: 00:01:31.388 ==> default: -- Keymap: en-us 00:01:31.388 ==> default: -- TPM Path: 00:01:31.388 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:31.388 ==> default: -- Command line args: 00:01:31.388 ==> default: -> value=-device, 00:01:31.388 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:31.388 ==> default: -> value=-drive, 00:01:31.388 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:31.388 ==> default: -> value=-device, 00:01:31.388 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:31.388 ==> default: -> value=-device, 00:01:31.388 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:31.388 ==> default: -> value=-drive, 00:01:31.388 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-1-drive0, 00:01:31.388 ==> default: -> value=-device, 00:01:31.388 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:31.388 ==> default: -> value=-device, 00:01:31.388 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:31.388 ==> default: -> value=-drive, 00:01:31.388 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:31.388 ==> default: -> value=-device, 00:01:31.388 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:31.388 ==> default: -> value=-drive, 00:01:31.388 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:31.388 ==> default: -> value=-device, 00:01:31.388 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:31.388 ==> default: -> value=-drive, 00:01:31.388 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:31.388 ==> default: -> value=-device, 00:01:31.389 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:31.389 ==> default: -> value=-device, 00:01:31.389 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:31.389 ==> default: -> value=-device, 00:01:31.389 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:31.389 ==> default: -> value=-drive, 00:01:31.389 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:31.389 ==> default: -> value=-device, 00:01:31.389 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:31.647 ==> default: Creating shared folders metadata... 00:01:31.647 ==> default: Starting domain. 00:01:33.036 ==> default: Waiting for domain to get an IP address... 00:01:55.010 ==> default: Waiting for SSH to become available... 00:01:55.577 ==> default: Configuring and enabling network interfaces... 00:01:59.791 default: SSH address: 192.168.121.106:22 00:01:59.791 default: SSH username: vagrant 00:01:59.791 default: SSH auth method: private key 00:02:02.326 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:10.502 ==> default: Mounting SSHFS shared folder... 00:02:11.877 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:02:11.877 ==> default: Checking Mount.. 00:02:12.812 ==> default: Folder Successfully Mounted! 00:02:12.812 ==> default: Running provisioner: file... 00:02:13.748 default: ~/.gitconfig => .gitconfig 00:02:14.006 00:02:14.006 SUCCESS! 00:02:14.006 00:02:14.006 cd to /var/jenkins/workspace/nvme-vg-autotest_3/fedora38-libvirt and type "vagrant ssh" to use. 00:02:14.006 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:14.006 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_3/fedora38-libvirt" to destroy all trace of vm. 00:02:14.006 00:02:14.016 [Pipeline] } 00:02:14.035 [Pipeline] // stage 00:02:14.046 [Pipeline] dir 00:02:14.047 Running in /var/jenkins/workspace/nvme-vg-autotest_3/fedora38-libvirt 00:02:14.048 [Pipeline] { 00:02:14.063 [Pipeline] catchError 00:02:14.065 [Pipeline] { 00:02:14.080 [Pipeline] sh 00:02:14.357 + vagrant ssh-config --host vagrant 00:02:14.357 + sed -ne /^Host/,$p 00:02:14.357 + tee ssh_conf 00:02:18.543 Host vagrant 00:02:18.543 HostName 192.168.121.106 00:02:18.543 User vagrant 00:02:18.543 Port 22 00:02:18.543 UserKnownHostsFile /dev/null 00:02:18.543 StrictHostKeyChecking no 00:02:18.543 PasswordAuthentication no 00:02:18.543 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:18.543 IdentitiesOnly yes 00:02:18.543 LogLevel FATAL 00:02:18.543 ForwardAgent yes 00:02:18.543 ForwardX11 yes 00:02:18.543 00:02:18.560 [Pipeline] withEnv 00:02:18.563 [Pipeline] { 00:02:18.579 [Pipeline] sh 00:02:18.859 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:18.860 source /etc/os-release 00:02:18.860 [[ -e /image.version ]] && img=$(< /image.version) 00:02:18.860 # Minimal, systemd-like check. 
00:02:18.860 if [[ -e /.dockerenv ]]; then 00:02:18.860 # Clear garbage from the node's name: 00:02:18.860 # agt-er_autotest_547-896 -> autotest_547-896 00:02:18.860 # $HOSTNAME is the actual container id 00:02:18.860 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:18.860 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:18.860 # We can assume this is a mount from a host where container is running, 00:02:18.860 # so fetch its hostname to easily identify the target swarm worker. 00:02:18.860 container="$(< /etc/hostname) ($agent)" 00:02:18.860 else 00:02:18.860 # Fallback 00:02:18.860 container=$agent 00:02:18.860 fi 00:02:18.860 fi 00:02:18.860 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:18.860 00:02:18.869 [Pipeline] } 00:02:18.884 [Pipeline] // withEnv 00:02:18.892 [Pipeline] setCustomBuildProperty 00:02:18.906 [Pipeline] stage 00:02:18.908 [Pipeline] { (Tests) 00:02:18.927 [Pipeline] sh 00:02:19.205 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:19.476 [Pipeline] sh 00:02:19.755 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:20.029 [Pipeline] timeout 00:02:20.029 Timeout set to expire in 40 min 00:02:20.031 [Pipeline] { 00:02:20.048 [Pipeline] sh 00:02:20.329 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:20.898 HEAD is now at 719d03c6a sock/uring: only register net impl if supported 00:02:20.914 [Pipeline] sh 00:02:21.193 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:21.467 [Pipeline] sh 00:02:21.746 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:22.028 [Pipeline] sh 00:02:22.313 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:02:22.313 ++ readlink -f spdk_repo 00:02:22.313 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:22.313 + [[ -n /home/vagrant/spdk_repo ]] 00:02:22.313 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:22.313 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:22.313 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:22.313 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:22.313 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:22.313 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:22.313 + cd /home/vagrant/spdk_repo 00:02:22.313 + source /etc/os-release 00:02:22.313 ++ NAME='Fedora Linux' 00:02:22.313 ++ VERSION='38 (Cloud Edition)' 00:02:22.313 ++ ID=fedora 00:02:22.313 ++ VERSION_ID=38 00:02:22.313 ++ VERSION_CODENAME= 00:02:22.313 ++ PLATFORM_ID=platform:f38 00:02:22.313 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:22.313 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:22.313 ++ LOGO=fedora-logo-icon 00:02:22.313 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:22.313 ++ HOME_URL=https://fedoraproject.org/ 00:02:22.313 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:22.313 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:22.313 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:22.313 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:22.313 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:22.313 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:22.313 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:22.313 ++ SUPPORT_END=2024-05-14 00:02:22.313 ++ VARIANT='Cloud Edition' 00:02:22.313 ++ VARIANT_ID=cloud 00:02:22.313 + uname -a 00:02:22.313 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:22.313 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:22.882 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:23.139 Hugepages 00:02:23.139 node hugesize free / total 00:02:23.139 node0 1048576kB 0 / 0 00:02:23.139 node0 2048kB 0 / 0 00:02:23.139 00:02:23.139 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:23.139 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:23.139 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:23.139 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:23.139 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:23.139 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:23.139 + rm -f /tmp/spdk-ld-path 00:02:23.139 + source autorun-spdk.conf 00:02:23.139 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:23.139 ++ SPDK_TEST_NVME=1 00:02:23.139 ++ SPDK_TEST_FTL=1 00:02:23.139 ++ SPDK_TEST_ISAL=1 00:02:23.139 ++ SPDK_RUN_ASAN=1 00:02:23.139 ++ SPDK_RUN_UBSAN=1 00:02:23.139 ++ SPDK_TEST_XNVME=1 00:02:23.139 ++ SPDK_TEST_NVME_FDP=1 00:02:23.139 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:23.139 ++ RUN_NIGHTLY=1 00:02:23.139 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:23.139 + [[ -n '' ]] 00:02:23.139 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:23.139 + for M in /var/spdk/build-*-manifest.txt 00:02:23.139 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:23.139 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:23.139 + for M in /var/spdk/build-*-manifest.txt 00:02:23.139 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:23.139 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:23.397 ++ uname 00:02:23.397 + [[ Linux == \L\i\n\u\x ]] 00:02:23.397 + sudo dmesg -T 00:02:23.397 + sudo dmesg --clear 00:02:23.397 + dmesg_pid=5204 00:02:23.397 + [[ Fedora Linux == FreeBSD ]] 00:02:23.397 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:23.397 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:23.397 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:23.397 + sudo dmesg -Tw 00:02:23.397 + [[ -x /usr/src/fio-static/fio ]] 00:02:23.397 + export FIO_BIN=/usr/src/fio-static/fio 00:02:23.397 + FIO_BIN=/usr/src/fio-static/fio 00:02:23.397 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:23.397 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:23.397 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:23.397 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:23.397 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:23.397 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:23.397 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:23.397 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:23.397 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:23.397 Test configuration: 00:02:23.397 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:23.397 SPDK_TEST_NVME=1 00:02:23.397 SPDK_TEST_FTL=1 00:02:23.397 SPDK_TEST_ISAL=1 00:02:23.397 SPDK_RUN_ASAN=1 00:02:23.397 SPDK_RUN_UBSAN=1 00:02:23.397 SPDK_TEST_XNVME=1 00:02:23.397 SPDK_TEST_NVME_FDP=1 00:02:23.397 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:23.397 RUN_NIGHTLY=1 09:08:09 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:23.397 09:08:09 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:23.397 09:08:09 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:23.397 09:08:09 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:23.397 09:08:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.397 09:08:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.397 09:08:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.397 09:08:09 -- paths/export.sh@5 -- $ export PATH 00:02:23.397 09:08:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.397 09:08:09 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:23.397 09:08:09 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:23.397 09:08:09 -- 
common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720775289.XXXXXX 00:02:23.397 09:08:09 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720775289.YQF0Gv 00:02:23.397 09:08:09 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:23.397 09:08:09 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:02:23.397 09:08:09 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:23.397 09:08:09 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:23.397 09:08:09 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:23.397 09:08:09 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:23.397 09:08:09 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:23.397 09:08:09 -- common/autotest_common.sh@10 -- $ set +x 00:02:23.397 09:08:09 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:23.397 09:08:09 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:23.397 09:08:09 -- pm/common@17 -- $ local monitor 00:02:23.397 09:08:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.397 09:08:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.397 09:08:09 -- pm/common@21 -- $ date +%s 00:02:23.397 09:08:09 -- pm/common@25 -- $ sleep 1 00:02:23.397 09:08:09 -- pm/common@21 -- $ date +%s 00:02:23.397 09:08:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720775289 00:02:23.397 09:08:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720775289 00:02:23.397 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720775289_collect-vmstat.pm.log 00:02:23.397 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720775289_collect-cpu-load.pm.log 00:02:24.336 09:08:10 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:24.336 09:08:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:24.336 09:08:10 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:24.336 09:08:10 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:24.336 09:08:10 -- spdk/autobuild.sh@16 -- $ date -u 00:02:24.336 Fri Jul 12 09:08:10 AM UTC 2024 00:02:24.336 09:08:10 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:24.595 v24.09-pre-202-g719d03c6a 00:02:24.595 09:08:10 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:24.595 09:08:10 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:24.595 09:08:10 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:24.595 09:08:10 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:24.595 09:08:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:24.595 ************************************ 00:02:24.595 START TEST asan 00:02:24.595 ************************************ 00:02:24.595 using asan 00:02:24.595 09:08:10 asan -- common/autotest_common.sh@1123 -- 
$ echo 'using asan' 00:02:24.595 00:02:24.595 real 0m0.000s 00:02:24.595 user 0m0.000s 00:02:24.595 sys 0m0.000s 00:02:24.595 09:08:10 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:24.595 09:08:10 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:24.595 ************************************ 00:02:24.595 END TEST asan 00:02:24.595 ************************************ 00:02:24.595 09:08:10 -- common/autotest_common.sh@1142 -- $ return 0 00:02:24.595 09:08:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:24.595 09:08:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:24.595 09:08:10 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:24.595 09:08:10 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:24.595 09:08:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:24.595 ************************************ 00:02:24.595 START TEST ubsan 00:02:24.595 ************************************ 00:02:24.595 using ubsan 00:02:24.595 09:08:10 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:24.595 00:02:24.595 real 0m0.000s 00:02:24.595 user 0m0.000s 00:02:24.595 sys 0m0.000s 00:02:24.595 09:08:10 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:24.595 ************************************ 00:02:24.595 END TEST ubsan 00:02:24.595 09:08:10 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:24.595 ************************************ 00:02:24.595 09:08:10 -- common/autotest_common.sh@1142 -- $ return 0 00:02:24.595 09:08:10 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:24.595 09:08:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:24.595 09:08:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:24.595 09:08:10 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:24.595 09:08:10 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:24.595 09:08:10 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:24.595 09:08:10 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:24.595 09:08:10 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:24.595 09:08:10 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:24.595 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:24.595 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:25.161 Using 'verbs' RDMA provider 00:02:38.299 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:53.198 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:53.198 Creating mk/config.mk...done. 00:02:53.198 Creating mk/cc.flags.mk...done. 00:02:53.198 Type 'make' to build. 
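
(Note on the configure step recorded just above.) The option list comes from autobuild's get_config_params in the log; outside the CI harness, roughly the same configure-and-make sequence could be reproduced as sketched below. This is a minimal sketch, not part of the CI scripts: the checkout path is a placeholder, the --with-fio and --with-xnvme options assume those sources are present as they are on the CI image, and the flag list itself is copied from the log.

  #!/usr/bin/env bash
  # Sketch: rerun the configure + make step seen above on a local SPDK checkout.
  set -euo pipefail

  SPDK_DIR=${SPDK_DIR:-$HOME/spdk_repo/spdk}   # placeholder; the CI VM uses /home/vagrant/spdk_repo/spdk

  cd "$SPDK_DIR"
  ./configure \
      --enable-debug --enable-werror \
      --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator \
      --disable-unit-tests \
      --enable-ubsan --enable-asan --enable-coverage \
      --with-ublk --with-xnvme --with-shared

  make -j"$(nproc)"   # the CI run below uses -j10
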
00:02:53.198 09:08:37 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:53.198 09:08:37 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:53.198 09:08:37 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:53.198 09:08:37 -- common/autotest_common.sh@10 -- $ set +x 00:02:53.198 ************************************ 00:02:53.198 START TEST make 00:02:53.198 ************************************ 00:02:53.198 09:08:37 make -- common/autotest_common.sh@1123 -- $ make -j10 00:02:53.198 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:53.198 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:53.198 meson setup builddir \ 00:02:53.198 -Dwith-libaio=enabled \ 00:02:53.198 -Dwith-liburing=enabled \ 00:02:53.198 -Dwith-libvfn=disabled \ 00:02:53.198 -Dwith-spdk=false && \ 00:02:53.198 meson compile -C builddir && \ 00:02:53.198 cd -) 00:02:53.198 make[1]: Nothing to be done for 'all'. 00:02:55.726 The Meson build system 00:02:55.726 Version: 1.3.1 00:02:55.726 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:55.726 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:55.726 Build type: native build 00:02:55.726 Project name: xnvme 00:02:55.726 Project version: 0.7.3 00:02:55.726 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:55.726 C linker for the host machine: cc ld.bfd 2.39-16 00:02:55.726 Host machine cpu family: x86_64 00:02:55.726 Host machine cpu: x86_64 00:02:55.726 Message: host_machine.system: linux 00:02:55.726 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:55.726 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:55.726 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:55.726 Run-time dependency threads found: YES 00:02:55.726 Has header "setupapi.h" : NO 00:02:55.726 Has header "linux/blkzoned.h" : YES 00:02:55.726 Has header "linux/blkzoned.h" : YES (cached) 00:02:55.726 Has header "libaio.h" : YES 00:02:55.726 Library aio found: YES 00:02:55.726 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:55.726 Run-time dependency liburing found: YES 2.2 00:02:55.726 Dependency libvfn skipped: feature with-libvfn disabled 00:02:55.726 Run-time dependency appleframeworks found: NO (tried framework) 00:02:55.726 Run-time dependency appleframeworks found: NO (tried framework) 00:02:55.726 Configuring xnvme_config.h using configuration 00:02:55.726 Configuring xnvme.spec using configuration 00:02:55.726 Run-time dependency bash-completion found: YES 2.11 00:02:55.726 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:55.726 Program cp found: YES (/usr/bin/cp) 00:02:55.726 Has header "winsock2.h" : NO 00:02:55.726 Has header "dbghelp.h" : NO 00:02:55.726 Library rpcrt4 found: NO 00:02:55.726 Library rt found: YES 00:02:55.726 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:55.726 Found CMake: /usr/bin/cmake (3.27.7) 00:02:55.726 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:02:55.726 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:02:55.726 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:02:55.726 Build targets in project: 32 00:02:55.726 00:02:55.726 xnvme 0.7.3 00:02:55.726 00:02:55.726 User defined options 00:02:55.726 with-libaio : enabled 00:02:55.726 with-liburing: enabled 00:02:55.726 with-libvfn : disabled 00:02:55.726 with-spdk : false 00:02:55.726 00:02:55.726 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:56.664 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:56.664 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:02:56.664 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:02:56.664 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:02:56.664 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:02:56.921 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:02:56.921 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:02:56.921 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:02:56.921 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:02:56.921 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:02:56.921 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:02:56.921 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:02:56.921 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:02:56.921 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:02:56.921 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:02:57.196 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:02:57.196 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:02:57.196 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:02:57.196 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:02:57.196 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:02:57.196 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:02:57.196 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:02:57.196 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:02:57.196 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:02:57.196 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:02:57.196 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:02:57.196 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:02:57.196 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:02:57.196 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:02:57.457 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:02:57.457 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:02:57.457 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:02:57.457 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:02:57.457 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:02:57.457 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:02:57.457 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:02:57.457 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:02:57.457 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:02:57.457 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:02:57.457 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:02:57.457 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:02:57.457 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:02:57.457 
[42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:02:57.457 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:02:57.457 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:02:57.457 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:02:57.457 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:02:57.457 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:02:57.457 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:02:57.457 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:02:57.457 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:02:57.457 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:02:57.457 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:02:57.714 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:02:57.714 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:02:57.714 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:02:57.714 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:02:57.714 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:02:57.714 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:02:57.714 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:02:57.714 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:02:57.714 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:02:57.714 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:02:57.714 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:02:57.972 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:02:57.972 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:02:57.972 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:02:57.972 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:02:57.972 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:02:57.972 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:02:57.972 [70/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:02:57.972 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:02:58.230 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:02:58.230 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:02:58.230 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:02:58.230 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:02:58.230 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:02:58.230 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:02:58.230 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:02:58.230 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:02:58.230 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:02:58.230 [81/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:02:58.488 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:02:58.488 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:02:58.488 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:02:58.488 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:02:58.488 [86/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:02:58.488 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:02:58.488 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:02:58.746 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:02:58.746 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:02:58.746 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:02:58.746 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:02:58.746 [93/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:02:58.746 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:02:58.746 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:02:58.746 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:02:58.746 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:02:58.746 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:02:58.746 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:02:58.746 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:02:58.746 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:02:58.746 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:02:58.746 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:02:58.746 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:02:58.746 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:02:59.004 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:02:59.004 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:02:59.004 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:02:59.004 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:02:59.004 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:02:59.004 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:02:59.004 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:02:59.004 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:02:59.004 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:02:59.004 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:02:59.004 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:02:59.004 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:02:59.004 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:02:59.004 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:02:59.004 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:02:59.004 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:02:59.004 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:02:59.004 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:02:59.004 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:02:59.261 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:02:59.261 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:02:59.261 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:02:59.261 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:02:59.261 [129/203] Compiling C object 
lib/libxnvme.a.p/xnvme_file.c.o 00:02:59.261 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:02:59.261 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:02:59.261 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:02:59.261 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:02:59.518 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:02:59.518 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:02:59.518 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:02:59.518 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:02:59.518 [138/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:02:59.518 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:02:59.518 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:02:59.518 [141/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:02:59.518 [142/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:02:59.776 [143/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:02:59.776 [144/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:02:59.776 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:02:59.776 [146/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:02:59.776 [147/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:02:59.776 [148/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:02:59.776 [149/203] Linking target lib/libxnvme.so 00:03:00.033 [150/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:00.033 [151/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:00.033 [152/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:00.033 [153/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:00.033 [154/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:00.033 [155/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:00.033 [156/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:00.033 [157/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:00.291 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:00.291 [159/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:00.291 [160/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:00.291 [161/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:00.291 [162/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:00.291 [163/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:00.291 [164/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:00.291 [165/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:00.291 [166/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:03:00.549 [167/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:00.549 [168/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:00.549 [169/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:00.549 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:00.549 [171/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:00.806 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:00.806 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:00.806 [174/203] Linking static target lib/libxnvme.a 00:03:00.806 [175/203] Linking target tests/xnvme_tests_cli 
00:03:00.806 [176/203] Linking target tests/xnvme_tests_async_intf 00:03:00.806 [177/203] Linking target tests/xnvme_tests_lblk 00:03:01.063 [178/203] Linking target tests/xnvme_tests_ioworker 00:03:01.063 [179/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:01.063 [180/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:01.063 [181/203] Linking target tests/xnvme_tests_znd_append 00:03:01.063 [182/203] Linking target tests/xnvme_tests_scc 00:03:01.063 [183/203] Linking target tests/xnvme_tests_xnvme_file 00:03:01.063 [184/203] Linking target tests/xnvme_tests_buf 00:03:01.063 [185/203] Linking target tests/xnvme_tests_enum 00:03:01.063 [186/203] Linking target tests/xnvme_tests_znd_zrwa 00:03:01.063 [187/203] Linking target tests/xnvme_tests_map 00:03:01.063 [188/203] Linking target tests/xnvme_tests_znd_state 00:03:01.063 [189/203] Linking target tools/xdd 00:03:01.063 [190/203] Linking target tools/lblk 00:03:01.063 [191/203] Linking target tools/xnvme_file 00:03:01.063 [192/203] Linking target tests/xnvme_tests_kvs 00:03:01.063 [193/203] Linking target tools/kvs 00:03:01.063 [194/203] Linking target tools/zoned 00:03:01.063 [195/203] Linking target tools/xnvme 00:03:01.063 [196/203] Linking target examples/xnvme_hello 00:03:01.063 [197/203] Linking target examples/xnvme_io_async 00:03:01.063 [198/203] Linking target examples/xnvme_dev 00:03:01.063 [199/203] Linking target examples/xnvme_enum 00:03:01.063 [200/203] Linking target examples/xnvme_single_async 00:03:01.063 [201/203] Linking target examples/zoned_io_async 00:03:01.063 [202/203] Linking target examples/xnvme_single_sync 00:03:01.063 [203/203] Linking target examples/zoned_io_sync 00:03:01.063 INFO: autodetecting backend as ninja 00:03:01.063 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:01.320 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:11.286 The Meson build system 00:03:11.286 Version: 1.3.1 00:03:11.286 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:11.286 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:11.286 Build type: native build 00:03:11.286 Program cat found: YES (/usr/bin/cat) 00:03:11.286 Project name: DPDK 00:03:11.286 Project version: 24.03.0 00:03:11.286 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:11.286 C linker for the host machine: cc ld.bfd 2.39-16 00:03:11.287 Host machine cpu family: x86_64 00:03:11.287 Host machine cpu: x86_64 00:03:11.287 Message: ## Building in Developer Mode ## 00:03:11.287 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:11.287 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:11.287 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:11.287 Program python3 found: YES (/usr/bin/python3) 00:03:11.287 Program cat found: YES (/usr/bin/cat) 00:03:11.287 Compiler for C supports arguments -march=native: YES 00:03:11.287 Checking for size of "void *" : 8 00:03:11.287 Checking for size of "void *" : 8 (cached) 00:03:11.287 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:03:11.287 Library m found: YES 00:03:11.287 Library numa found: YES 00:03:11.287 Has header "numaif.h" : YES 00:03:11.287 Library fdt found: NO 00:03:11.287 Library execinfo found: NO 00:03:11.287 Has header "execinfo.h" : YES 00:03:11.287 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:11.287 
Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:11.287 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:11.287 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:11.287 Run-time dependency openssl found: YES 3.0.9 00:03:11.287 Run-time dependency libpcap found: YES 1.10.4 00:03:11.287 Has header "pcap.h" with dependency libpcap: YES 00:03:11.287 Compiler for C supports arguments -Wcast-qual: YES 00:03:11.287 Compiler for C supports arguments -Wdeprecated: YES 00:03:11.287 Compiler for C supports arguments -Wformat: YES 00:03:11.287 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:11.287 Compiler for C supports arguments -Wformat-security: NO 00:03:11.287 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:11.287 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:11.287 Compiler for C supports arguments -Wnested-externs: YES 00:03:11.287 Compiler for C supports arguments -Wold-style-definition: YES 00:03:11.287 Compiler for C supports arguments -Wpointer-arith: YES 00:03:11.287 Compiler for C supports arguments -Wsign-compare: YES 00:03:11.287 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:11.287 Compiler for C supports arguments -Wundef: YES 00:03:11.287 Compiler for C supports arguments -Wwrite-strings: YES 00:03:11.287 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:11.287 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:11.287 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:11.287 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:11.287 Program objdump found: YES (/usr/bin/objdump) 00:03:11.287 Compiler for C supports arguments -mavx512f: YES 00:03:11.287 Checking if "AVX512 checking" compiles: YES 00:03:11.287 Fetching value of define "__SSE4_2__" : 1 00:03:11.287 Fetching value of define "__AES__" : 1 00:03:11.287 Fetching value of define "__AVX__" : 1 00:03:11.287 Fetching value of define "__AVX2__" : 1 00:03:11.287 Fetching value of define "__AVX512BW__" : (undefined) 00:03:11.287 Fetching value of define "__AVX512CD__" : (undefined) 00:03:11.287 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:11.287 Fetching value of define "__AVX512F__" : (undefined) 00:03:11.287 Fetching value of define "__AVX512VL__" : (undefined) 00:03:11.287 Fetching value of define "__PCLMUL__" : 1 00:03:11.287 Fetching value of define "__RDRND__" : 1 00:03:11.287 Fetching value of define "__RDSEED__" : 1 00:03:11.287 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:11.287 Fetching value of define "__znver1__" : (undefined) 00:03:11.287 Fetching value of define "__znver2__" : (undefined) 00:03:11.287 Fetching value of define "__znver3__" : (undefined) 00:03:11.287 Fetching value of define "__znver4__" : (undefined) 00:03:11.287 Library asan found: YES 00:03:11.287 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:11.287 Message: lib/log: Defining dependency "log" 00:03:11.287 Message: lib/kvargs: Defining dependency "kvargs" 00:03:11.287 Message: lib/telemetry: Defining dependency "telemetry" 00:03:11.287 Library rt found: YES 00:03:11.287 Checking for function "getentropy" : NO 00:03:11.287 Message: lib/eal: Defining dependency "eal" 00:03:11.287 Message: lib/ring: Defining dependency "ring" 00:03:11.287 Message: lib/rcu: Defining dependency "rcu" 00:03:11.287 Message: lib/mempool: Defining dependency "mempool" 00:03:11.287 Message: lib/mbuf: Defining dependency "mbuf" 
00:03:11.287 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:11.287 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:11.287 Compiler for C supports arguments -mpclmul: YES 00:03:11.287 Compiler for C supports arguments -maes: YES 00:03:11.287 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:11.287 Compiler for C supports arguments -mavx512bw: YES 00:03:11.287 Compiler for C supports arguments -mavx512dq: YES 00:03:11.287 Compiler for C supports arguments -mavx512vl: YES 00:03:11.287 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:11.287 Compiler for C supports arguments -mavx2: YES 00:03:11.287 Compiler for C supports arguments -mavx: YES 00:03:11.287 Message: lib/net: Defining dependency "net" 00:03:11.287 Message: lib/meter: Defining dependency "meter" 00:03:11.287 Message: lib/ethdev: Defining dependency "ethdev" 00:03:11.287 Message: lib/pci: Defining dependency "pci" 00:03:11.287 Message: lib/cmdline: Defining dependency "cmdline" 00:03:11.287 Message: lib/hash: Defining dependency "hash" 00:03:11.287 Message: lib/timer: Defining dependency "timer" 00:03:11.287 Message: lib/compressdev: Defining dependency "compressdev" 00:03:11.287 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:11.287 Message: lib/dmadev: Defining dependency "dmadev" 00:03:11.287 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:11.287 Message: lib/power: Defining dependency "power" 00:03:11.287 Message: lib/reorder: Defining dependency "reorder" 00:03:11.287 Message: lib/security: Defining dependency "security" 00:03:11.287 Has header "linux/userfaultfd.h" : YES 00:03:11.287 Has header "linux/vduse.h" : YES 00:03:11.287 Message: lib/vhost: Defining dependency "vhost" 00:03:11.287 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:11.287 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:11.287 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:11.287 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:11.287 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:11.287 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:11.287 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:11.287 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:11.287 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:11.287 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:11.287 Program doxygen found: YES (/usr/bin/doxygen) 00:03:11.287 Configuring doxy-api-html.conf using configuration 00:03:11.287 Configuring doxy-api-man.conf using configuration 00:03:11.287 Program mandb found: YES (/usr/bin/mandb) 00:03:11.287 Program sphinx-build found: NO 00:03:11.287 Configuring rte_build_config.h using configuration 00:03:11.287 Message: 00:03:11.287 ================= 00:03:11.287 Applications Enabled 00:03:11.287 ================= 00:03:11.287 00:03:11.287 apps: 00:03:11.287 00:03:11.287 00:03:11.287 Message: 00:03:11.287 ================= 00:03:11.287 Libraries Enabled 00:03:11.287 ================= 00:03:11.287 00:03:11.287 libs: 00:03:11.287 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:11.287 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:11.287 cryptodev, dmadev, power, reorder, security, vhost, 00:03:11.287 00:03:11.287 Message: 00:03:11.287 =============== 00:03:11.287 Drivers Enabled 00:03:11.287 
=============== 00:03:11.287 00:03:11.287 common: 00:03:11.287 00:03:11.287 bus: 00:03:11.287 pci, vdev, 00:03:11.287 mempool: 00:03:11.287 ring, 00:03:11.287 dma: 00:03:11.287 00:03:11.287 net: 00:03:11.287 00:03:11.287 crypto: 00:03:11.287 00:03:11.287 compress: 00:03:11.287 00:03:11.287 vdpa: 00:03:11.287 00:03:11.287 00:03:11.287 Message: 00:03:11.287 ================= 00:03:11.287 Content Skipped 00:03:11.287 ================= 00:03:11.287 00:03:11.287 apps: 00:03:11.287 dumpcap: explicitly disabled via build config 00:03:11.287 graph: explicitly disabled via build config 00:03:11.287 pdump: explicitly disabled via build config 00:03:11.287 proc-info: explicitly disabled via build config 00:03:11.287 test-acl: explicitly disabled via build config 00:03:11.287 test-bbdev: explicitly disabled via build config 00:03:11.287 test-cmdline: explicitly disabled via build config 00:03:11.287 test-compress-perf: explicitly disabled via build config 00:03:11.287 test-crypto-perf: explicitly disabled via build config 00:03:11.287 test-dma-perf: explicitly disabled via build config 00:03:11.287 test-eventdev: explicitly disabled via build config 00:03:11.287 test-fib: explicitly disabled via build config 00:03:11.287 test-flow-perf: explicitly disabled via build config 00:03:11.287 test-gpudev: explicitly disabled via build config 00:03:11.287 test-mldev: explicitly disabled via build config 00:03:11.287 test-pipeline: explicitly disabled via build config 00:03:11.287 test-pmd: explicitly disabled via build config 00:03:11.287 test-regex: explicitly disabled via build config 00:03:11.287 test-sad: explicitly disabled via build config 00:03:11.287 test-security-perf: explicitly disabled via build config 00:03:11.287 00:03:11.287 libs: 00:03:11.288 argparse: explicitly disabled via build config 00:03:11.288 metrics: explicitly disabled via build config 00:03:11.288 acl: explicitly disabled via build config 00:03:11.288 bbdev: explicitly disabled via build config 00:03:11.288 bitratestats: explicitly disabled via build config 00:03:11.288 bpf: explicitly disabled via build config 00:03:11.288 cfgfile: explicitly disabled via build config 00:03:11.288 distributor: explicitly disabled via build config 00:03:11.288 efd: explicitly disabled via build config 00:03:11.288 eventdev: explicitly disabled via build config 00:03:11.288 dispatcher: explicitly disabled via build config 00:03:11.288 gpudev: explicitly disabled via build config 00:03:11.288 gro: explicitly disabled via build config 00:03:11.288 gso: explicitly disabled via build config 00:03:11.288 ip_frag: explicitly disabled via build config 00:03:11.288 jobstats: explicitly disabled via build config 00:03:11.288 latencystats: explicitly disabled via build config 00:03:11.288 lpm: explicitly disabled via build config 00:03:11.288 member: explicitly disabled via build config 00:03:11.288 pcapng: explicitly disabled via build config 00:03:11.288 rawdev: explicitly disabled via build config 00:03:11.288 regexdev: explicitly disabled via build config 00:03:11.288 mldev: explicitly disabled via build config 00:03:11.288 rib: explicitly disabled via build config 00:03:11.288 sched: explicitly disabled via build config 00:03:11.288 stack: explicitly disabled via build config 00:03:11.288 ipsec: explicitly disabled via build config 00:03:11.288 pdcp: explicitly disabled via build config 00:03:11.288 fib: explicitly disabled via build config 00:03:11.288 port: explicitly disabled via build config 00:03:11.288 pdump: explicitly disabled via build config 
00:03:11.288 table: explicitly disabled via build config 00:03:11.288 pipeline: explicitly disabled via build config 00:03:11.288 graph: explicitly disabled via build config 00:03:11.288 node: explicitly disabled via build config 00:03:11.288 00:03:11.288 drivers: 00:03:11.288 common/cpt: not in enabled drivers build config 00:03:11.288 common/dpaax: not in enabled drivers build config 00:03:11.288 common/iavf: not in enabled drivers build config 00:03:11.288 common/idpf: not in enabled drivers build config 00:03:11.288 common/ionic: not in enabled drivers build config 00:03:11.288 common/mvep: not in enabled drivers build config 00:03:11.288 common/octeontx: not in enabled drivers build config 00:03:11.288 bus/auxiliary: not in enabled drivers build config 00:03:11.288 bus/cdx: not in enabled drivers build config 00:03:11.288 bus/dpaa: not in enabled drivers build config 00:03:11.288 bus/fslmc: not in enabled drivers build config 00:03:11.288 bus/ifpga: not in enabled drivers build config 00:03:11.288 bus/platform: not in enabled drivers build config 00:03:11.288 bus/uacce: not in enabled drivers build config 00:03:11.288 bus/vmbus: not in enabled drivers build config 00:03:11.288 common/cnxk: not in enabled drivers build config 00:03:11.288 common/mlx5: not in enabled drivers build config 00:03:11.288 common/nfp: not in enabled drivers build config 00:03:11.288 common/nitrox: not in enabled drivers build config 00:03:11.288 common/qat: not in enabled drivers build config 00:03:11.288 common/sfc_efx: not in enabled drivers build config 00:03:11.288 mempool/bucket: not in enabled drivers build config 00:03:11.288 mempool/cnxk: not in enabled drivers build config 00:03:11.288 mempool/dpaa: not in enabled drivers build config 00:03:11.288 mempool/dpaa2: not in enabled drivers build config 00:03:11.288 mempool/octeontx: not in enabled drivers build config 00:03:11.288 mempool/stack: not in enabled drivers build config 00:03:11.288 dma/cnxk: not in enabled drivers build config 00:03:11.288 dma/dpaa: not in enabled drivers build config 00:03:11.288 dma/dpaa2: not in enabled drivers build config 00:03:11.288 dma/hisilicon: not in enabled drivers build config 00:03:11.288 dma/idxd: not in enabled drivers build config 00:03:11.288 dma/ioat: not in enabled drivers build config 00:03:11.288 dma/skeleton: not in enabled drivers build config 00:03:11.288 net/af_packet: not in enabled drivers build config 00:03:11.288 net/af_xdp: not in enabled drivers build config 00:03:11.288 net/ark: not in enabled drivers build config 00:03:11.288 net/atlantic: not in enabled drivers build config 00:03:11.288 net/avp: not in enabled drivers build config 00:03:11.288 net/axgbe: not in enabled drivers build config 00:03:11.288 net/bnx2x: not in enabled drivers build config 00:03:11.288 net/bnxt: not in enabled drivers build config 00:03:11.288 net/bonding: not in enabled drivers build config 00:03:11.288 net/cnxk: not in enabled drivers build config 00:03:11.288 net/cpfl: not in enabled drivers build config 00:03:11.288 net/cxgbe: not in enabled drivers build config 00:03:11.288 net/dpaa: not in enabled drivers build config 00:03:11.288 net/dpaa2: not in enabled drivers build config 00:03:11.288 net/e1000: not in enabled drivers build config 00:03:11.288 net/ena: not in enabled drivers build config 00:03:11.288 net/enetc: not in enabled drivers build config 00:03:11.288 net/enetfec: not in enabled drivers build config 00:03:11.288 net/enic: not in enabled drivers build config 00:03:11.288 net/failsafe: not in enabled 
drivers build config 00:03:11.288 net/fm10k: not in enabled drivers build config 00:03:11.288 net/gve: not in enabled drivers build config 00:03:11.288 net/hinic: not in enabled drivers build config 00:03:11.288 net/hns3: not in enabled drivers build config 00:03:11.288 net/i40e: not in enabled drivers build config 00:03:11.288 net/iavf: not in enabled drivers build config 00:03:11.288 net/ice: not in enabled drivers build config 00:03:11.288 net/idpf: not in enabled drivers build config 00:03:11.288 net/igc: not in enabled drivers build config 00:03:11.288 net/ionic: not in enabled drivers build config 00:03:11.288 net/ipn3ke: not in enabled drivers build config 00:03:11.288 net/ixgbe: not in enabled drivers build config 00:03:11.288 net/mana: not in enabled drivers build config 00:03:11.288 net/memif: not in enabled drivers build config 00:03:11.288 net/mlx4: not in enabled drivers build config 00:03:11.288 net/mlx5: not in enabled drivers build config 00:03:11.288 net/mvneta: not in enabled drivers build config 00:03:11.288 net/mvpp2: not in enabled drivers build config 00:03:11.288 net/netvsc: not in enabled drivers build config 00:03:11.288 net/nfb: not in enabled drivers build config 00:03:11.288 net/nfp: not in enabled drivers build config 00:03:11.288 net/ngbe: not in enabled drivers build config 00:03:11.288 net/null: not in enabled drivers build config 00:03:11.288 net/octeontx: not in enabled drivers build config 00:03:11.288 net/octeon_ep: not in enabled drivers build config 00:03:11.288 net/pcap: not in enabled drivers build config 00:03:11.288 net/pfe: not in enabled drivers build config 00:03:11.288 net/qede: not in enabled drivers build config 00:03:11.288 net/ring: not in enabled drivers build config 00:03:11.288 net/sfc: not in enabled drivers build config 00:03:11.288 net/softnic: not in enabled drivers build config 00:03:11.288 net/tap: not in enabled drivers build config 00:03:11.288 net/thunderx: not in enabled drivers build config 00:03:11.288 net/txgbe: not in enabled drivers build config 00:03:11.288 net/vdev_netvsc: not in enabled drivers build config 00:03:11.288 net/vhost: not in enabled drivers build config 00:03:11.288 net/virtio: not in enabled drivers build config 00:03:11.288 net/vmxnet3: not in enabled drivers build config 00:03:11.288 raw/*: missing internal dependency, "rawdev" 00:03:11.288 crypto/armv8: not in enabled drivers build config 00:03:11.288 crypto/bcmfs: not in enabled drivers build config 00:03:11.288 crypto/caam_jr: not in enabled drivers build config 00:03:11.288 crypto/ccp: not in enabled drivers build config 00:03:11.288 crypto/cnxk: not in enabled drivers build config 00:03:11.288 crypto/dpaa_sec: not in enabled drivers build config 00:03:11.288 crypto/dpaa2_sec: not in enabled drivers build config 00:03:11.288 crypto/ipsec_mb: not in enabled drivers build config 00:03:11.288 crypto/mlx5: not in enabled drivers build config 00:03:11.288 crypto/mvsam: not in enabled drivers build config 00:03:11.288 crypto/nitrox: not in enabled drivers build config 00:03:11.288 crypto/null: not in enabled drivers build config 00:03:11.288 crypto/octeontx: not in enabled drivers build config 00:03:11.288 crypto/openssl: not in enabled drivers build config 00:03:11.288 crypto/scheduler: not in enabled drivers build config 00:03:11.288 crypto/uadk: not in enabled drivers build config 00:03:11.288 crypto/virtio: not in enabled drivers build config 00:03:11.288 compress/isal: not in enabled drivers build config 00:03:11.288 compress/mlx5: not in enabled 
drivers build config 00:03:11.288 compress/nitrox: not in enabled drivers build config 00:03:11.288 compress/octeontx: not in enabled drivers build config 00:03:11.288 compress/zlib: not in enabled drivers build config 00:03:11.288 regex/*: missing internal dependency, "regexdev" 00:03:11.288 ml/*: missing internal dependency, "mldev" 00:03:11.288 vdpa/ifc: not in enabled drivers build config 00:03:11.288 vdpa/mlx5: not in enabled drivers build config 00:03:11.288 vdpa/nfp: not in enabled drivers build config 00:03:11.288 vdpa/sfc: not in enabled drivers build config 00:03:11.288 event/*: missing internal dependency, "eventdev" 00:03:11.288 baseband/*: missing internal dependency, "bbdev" 00:03:11.288 gpu/*: missing internal dependency, "gpudev" 00:03:11.288 00:03:11.288 00:03:11.288 Build targets in project: 85 00:03:11.288 00:03:11.288 DPDK 24.03.0 00:03:11.288 00:03:11.288 User defined options 00:03:11.288 buildtype : debug 00:03:11.288 default_library : shared 00:03:11.288 libdir : lib 00:03:11.288 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:11.288 b_sanitize : address 00:03:11.288 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:11.288 c_link_args : 00:03:11.288 cpu_instruction_set: native 00:03:11.288 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:11.288 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:11.288 enable_docs : false 00:03:11.288 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:11.288 enable_kmods : false 00:03:11.288 max_lcores : 128 00:03:11.288 tests : false 00:03:11.288 00:03:11.288 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:11.854 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:12.113 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:12.113 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:12.113 [3/268] Linking static target lib/librte_kvargs.a 00:03:12.113 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:12.113 [5/268] Linking static target lib/librte_log.a 00:03:12.113 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:12.679 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.937 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:13.195 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:13.195 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:13.454 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:13.454 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:13.454 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:13.454 [14/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.454 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:13.712 [16/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:13.712 [17/268] Linking target lib/librte_log.so.24.1 00:03:13.970 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:13.970 [19/268] Linking static target lib/librte_telemetry.a 00:03:13.970 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:13.970 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:14.228 [22/268] Linking target lib/librte_kvargs.so.24.1 00:03:14.794 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:14.794 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:15.052 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:15.052 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:15.052 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:15.052 [28/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.052 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:15.052 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:15.052 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:15.310 [32/268] Linking target lib/librte_telemetry.so.24.1 00:03:15.310 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:15.568 [34/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:15.826 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:15.826 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:16.085 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:16.085 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:16.343 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:16.343 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:16.602 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:16.602 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:16.602 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:16.859 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:16.859 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:17.116 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:17.116 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:17.374 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:17.632 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:17.632 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:17.889 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:17.889 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:18.454 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:18.454 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:18.454 [55/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:18.454 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:18.712 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:18.712 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:18.970 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:19.228 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:19.228 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:19.228 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:19.228 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:19.486 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:19.744 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:20.065 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:20.346 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:20.346 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:20.603 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:20.861 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:20.861 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:21.119 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:21.119 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:21.119 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:21.119 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:21.119 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:21.377 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:21.636 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:21.636 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:21.895 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:22.153 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:22.153 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:22.719 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:22.978 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:22.978 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:22.978 [86/268] Linking static target lib/librte_ring.a 00:03:22.978 [87/268] Linking static target lib/librte_eal.a 00:03:23.544 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:23.544 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:23.544 [90/268] Linking static target lib/librte_rcu.a 00:03:23.544 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:23.544 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:23.544 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:23.803 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.803 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:23.803 [96/268] Linking static target lib/librte_mempool.a 00:03:24.370 
[97/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.370 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:25.305 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:25.305 [100/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:25.305 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:25.305 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:25.305 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:25.305 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:25.562 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:25.562 [106/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.821 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:25.821 [108/268] Linking static target lib/librte_net.a 00:03:25.821 [109/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:25.821 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:25.821 [111/268] Linking static target lib/librte_meter.a 00:03:26.079 [112/268] Linking static target lib/librte_mbuf.a 00:03:26.646 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.646 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:26.646 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.646 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:26.646 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:27.210 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:27.468 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:27.468 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.033 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:28.291 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:28.291 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:28.291 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:28.549 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:28.549 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:28.549 [127/268] Linking static target lib/librte_pci.a 00:03:28.549 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:28.549 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:28.807 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:29.065 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:29.065 [132/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.065 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:29.324 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:29.324 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:29.324 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:29.324 [137/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:29.583 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:29.583 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:29.583 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:29.583 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:29.840 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:29.840 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:29.840 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:30.098 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:30.099 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:30.099 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:30.099 [148/268] Linking static target lib/librte_cmdline.a 00:03:30.665 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:30.665 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:31.231 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:31.231 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:31.231 [153/268] Linking static target lib/librte_ethdev.a 00:03:31.231 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:31.231 [155/268] Linking static target lib/librte_timer.a 00:03:31.488 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:31.488 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:31.746 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:32.313 [159/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.313 [160/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.313 [161/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:32.313 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:32.572 [163/268] Linking static target lib/librte_hash.a 00:03:32.572 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:32.572 [165/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:32.572 [166/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:32.572 [167/268] Linking static target lib/librte_compressdev.a 00:03:33.136 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:33.136 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:33.136 [170/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:33.136 [171/268] Linking static target lib/librte_dmadev.a 00:03:33.699 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:33.956 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:33.956 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:33.956 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.956 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:33.956 [177/268] 
Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.519 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.776 [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:34.776 [180/268] Linking static target lib/librte_cryptodev.a 00:03:34.776 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:34.776 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:35.034 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:35.034 [184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:35.291 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:35.291 [186/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:35.291 [187/268] Linking static target lib/librte_reorder.a 00:03:35.291 [188/268] Linking static target lib/librte_power.a 00:03:36.223 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:36.223 [190/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.223 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:36.223 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:36.223 [193/268] Linking static target lib/librte_security.a 00:03:36.223 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:36.788 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.788 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:37.045 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.302 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:37.302 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:37.867 [200/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.867 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:37.867 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:37.867 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:38.124 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:38.124 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:38.382 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:38.382 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:38.640 [208/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.640 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:38.640 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:38.898 [211/268] Linking target lib/librte_eal.so.24.1 00:03:38.898 [212/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:38.898 [213/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:38.898 [214/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:38.898 [215/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:39.156 [216/268] Linking target lib/librte_ring.so.24.1 00:03:39.156 
[217/268] Linking target lib/librte_pci.so.24.1 00:03:39.156 [218/268] Linking target lib/librte_meter.so.24.1 00:03:39.156 [219/268] Linking target lib/librte_timer.so.24.1 00:03:39.156 [220/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:39.156 [221/268] Linking target lib/librte_dmadev.so.24.1 00:03:39.156 [222/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:39.156 [223/268] Linking static target drivers/librte_bus_pci.a 00:03:39.156 [224/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:39.156 [225/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:39.156 [226/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:39.156 [227/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:39.156 [228/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:39.415 [229/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:39.415 [230/268] Linking target lib/librte_rcu.so.24.1 00:03:39.415 [231/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:39.415 [232/268] Linking target lib/librte_mempool.so.24.1 00:03:39.415 [233/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:39.415 [234/268] Linking static target drivers/librte_bus_vdev.a 00:03:39.415 [235/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:39.415 [236/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:39.415 [237/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:39.415 [238/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:39.673 [239/268] Linking target lib/librte_mbuf.so.24.1 00:03:39.673 [240/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:39.673 [241/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.673 [242/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:39.673 [243/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:39.673 [244/268] Linking static target drivers/librte_mempool_ring.a 00:03:39.673 [245/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:39.673 [246/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.673 [247/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:39.673 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:39.673 [249/268] Linking target lib/librte_compressdev.so.24.1 00:03:39.673 [250/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:39.673 [251/268] Linking target lib/librte_reorder.so.24.1 00:03:39.673 [252/268] Linking target lib/librte_net.so.24.1 00:03:40.029 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:03:40.029 [254/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:40.029 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:40.029 [256/268] Linking target lib/librte_cmdline.so.24.1 00:03:40.029 [257/268] Linking target 
lib/librte_hash.so.24.1 00:03:40.029 [258/268] Linking target lib/librte_security.so.24.1 00:03:40.322 [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:41.257 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.257 [261/268] Linking target lib/librte_ethdev.so.24.1 00:03:41.257 [262/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:41.257 [263/268] Linking target lib/librte_power.so.24.1 00:03:41.824 [264/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:46.010 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:46.010 [266/268] Linking static target lib/librte_vhost.a 00:03:47.383 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.383 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:47.383 INFO: autodetecting backend as ninja 00:03:47.383 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:48.757 CC lib/ut/ut.o 00:03:48.757 CC lib/ut_mock/mock.o 00:03:48.757 CC lib/log/log.o 00:03:48.757 CC lib/log/log_deprecated.o 00:03:48.758 CC lib/log/log_flags.o 00:03:48.758 LIB libspdk_ut.a 00:03:48.758 LIB libspdk_ut_mock.a 00:03:48.758 LIB libspdk_log.a 00:03:48.758 SO libspdk_ut.so.2.0 00:03:48.758 SO libspdk_ut_mock.so.6.0 00:03:49.018 SO libspdk_log.so.7.0 00:03:49.018 SYMLINK libspdk_ut_mock.so 00:03:49.018 SYMLINK libspdk_ut.so 00:03:49.018 SYMLINK libspdk_log.so 00:03:49.276 CC lib/ioat/ioat.o 00:03:49.276 CC lib/dma/dma.o 00:03:49.276 CC lib/util/base64.o 00:03:49.276 CC lib/util/bit_array.o 00:03:49.276 CC lib/util/cpuset.o 00:03:49.276 CC lib/util/crc16.o 00:03:49.276 CC lib/util/crc32.o 00:03:49.276 CC lib/util/crc32c.o 00:03:49.276 CXX lib/trace_parser/trace.o 00:03:49.276 CC lib/vfio_user/host/vfio_user_pci.o 00:03:49.534 CC lib/util/crc32_ieee.o 00:03:49.534 CC lib/vfio_user/host/vfio_user.o 00:03:49.534 CC lib/util/crc64.o 00:03:49.534 CC lib/util/dif.o 00:03:49.534 CC lib/util/fd.o 00:03:49.534 LIB libspdk_dma.a 00:03:49.534 CC lib/util/file.o 00:03:49.534 SO libspdk_dma.so.4.0 00:03:49.792 CC lib/util/hexlify.o 00:03:49.793 CC lib/util/iov.o 00:03:49.793 CC lib/util/math.o 00:03:49.793 LIB libspdk_ioat.a 00:03:49.793 SYMLINK libspdk_dma.so 00:03:49.793 CC lib/util/pipe.o 00:03:49.793 CC lib/util/strerror_tls.o 00:03:49.793 CC lib/util/string.o 00:03:50.050 SO libspdk_ioat.so.7.0 00:03:50.050 LIB libspdk_vfio_user.a 00:03:50.050 CC lib/util/uuid.o 00:03:50.050 SO libspdk_vfio_user.so.5.0 00:03:50.050 CC lib/util/fd_group.o 00:03:50.050 SYMLINK libspdk_ioat.so 00:03:50.050 CC lib/util/xor.o 00:03:50.050 CC lib/util/zipf.o 00:03:50.050 SYMLINK libspdk_vfio_user.so 00:03:50.617 LIB libspdk_util.a 00:03:50.617 SO libspdk_util.so.9.1 00:03:50.874 LIB libspdk_trace_parser.a 00:03:50.874 SYMLINK libspdk_util.so 00:03:50.874 SO libspdk_trace_parser.so.5.0 00:03:51.133 CC lib/vmd/vmd.o 00:03:51.133 CC lib/json/json_parse.o 00:03:51.133 CC lib/vmd/led.o 00:03:51.133 CC lib/json/json_util.o 00:03:51.133 CC lib/conf/conf.o 00:03:51.133 CC lib/idxd/idxd.o 00:03:51.133 CC lib/rdma_provider/common.o 00:03:51.133 CC lib/rdma_utils/rdma_utils.o 00:03:51.133 SYMLINK libspdk_trace_parser.so 00:03:51.133 CC lib/env_dpdk/env.o 00:03:51.133 CC lib/env_dpdk/memory.o 00:03:51.392 CC lib/json/json_write.o 00:03:51.392 CC lib/idxd/idxd_user.o 00:03:51.392 CC 
lib/rdma_provider/rdma_provider_verbs.o 00:03:51.650 LIB libspdk_conf.a 00:03:51.650 SO libspdk_conf.so.6.0 00:03:51.650 CC lib/env_dpdk/pci.o 00:03:51.650 LIB libspdk_json.a 00:03:51.650 SO libspdk_json.so.6.0 00:03:51.650 LIB libspdk_rdma_utils.a 00:03:51.650 SYMLINK libspdk_conf.so 00:03:51.650 SO libspdk_rdma_utils.so.1.0 00:03:51.650 CC lib/env_dpdk/init.o 00:03:51.650 SYMLINK libspdk_json.so 00:03:51.650 CC lib/env_dpdk/threads.o 00:03:51.908 SYMLINK libspdk_rdma_utils.so 00:03:51.908 CC lib/idxd/idxd_kernel.o 00:03:51.908 LIB libspdk_rdma_provider.a 00:03:51.908 SO libspdk_rdma_provider.so.6.0 00:03:51.908 CC lib/env_dpdk/pci_ioat.o 00:03:51.908 CC lib/env_dpdk/pci_virtio.o 00:03:51.908 SYMLINK libspdk_rdma_provider.so 00:03:51.908 CC lib/env_dpdk/pci_vmd.o 00:03:52.167 LIB libspdk_vmd.a 00:03:52.167 LIB libspdk_idxd.a 00:03:52.167 CC lib/jsonrpc/jsonrpc_server.o 00:03:52.167 SO libspdk_idxd.so.12.0 00:03:52.167 SO libspdk_vmd.so.6.0 00:03:52.167 CC lib/env_dpdk/pci_idxd.o 00:03:52.167 CC lib/env_dpdk/pci_event.o 00:03:52.167 CC lib/env_dpdk/sigbus_handler.o 00:03:52.167 SYMLINK libspdk_vmd.so 00:03:52.167 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:52.167 CC lib/env_dpdk/pci_dpdk.o 00:03:52.167 SYMLINK libspdk_idxd.so 00:03:52.167 CC lib/jsonrpc/jsonrpc_client.o 00:03:52.425 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:52.425 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:52.425 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:52.684 LIB libspdk_jsonrpc.a 00:03:52.684 SO libspdk_jsonrpc.so.6.0 00:03:52.684 SYMLINK libspdk_jsonrpc.so 00:03:52.943 CC lib/rpc/rpc.o 00:03:53.202 LIB libspdk_rpc.a 00:03:53.460 SO libspdk_rpc.so.6.0 00:03:53.460 SYMLINK libspdk_rpc.so 00:03:53.460 LIB libspdk_env_dpdk.a 00:03:53.719 CC lib/trace/trace.o 00:03:53.719 CC lib/notify/notify.o 00:03:53.719 CC lib/trace/trace_flags.o 00:03:53.719 CC lib/notify/notify_rpc.o 00:03:53.719 CC lib/keyring/keyring.o 00:03:53.719 CC lib/trace/trace_rpc.o 00:03:53.719 CC lib/keyring/keyring_rpc.o 00:03:53.719 SO libspdk_env_dpdk.so.14.1 00:03:53.977 LIB libspdk_notify.a 00:03:53.977 SYMLINK libspdk_env_dpdk.so 00:03:53.977 SO libspdk_notify.so.6.0 00:03:53.977 LIB libspdk_keyring.a 00:03:53.977 SYMLINK libspdk_notify.so 00:03:53.977 LIB libspdk_trace.a 00:03:53.977 SO libspdk_keyring.so.1.0 00:03:54.234 SO libspdk_trace.so.10.0 00:03:54.234 SYMLINK libspdk_keyring.so 00:03:54.234 SYMLINK libspdk_trace.so 00:03:54.493 CC lib/sock/sock.o 00:03:54.493 CC lib/sock/sock_rpc.o 00:03:54.493 CC lib/thread/thread.o 00:03:54.493 CC lib/thread/iobuf.o 00:03:55.060 LIB libspdk_sock.a 00:03:55.060 SO libspdk_sock.so.10.0 00:03:55.317 SYMLINK libspdk_sock.so 00:03:55.575 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:55.575 CC lib/nvme/nvme_ctrlr.o 00:03:55.575 CC lib/nvme/nvme_fabric.o 00:03:55.575 CC lib/nvme/nvme_ns_cmd.o 00:03:55.575 CC lib/nvme/nvme_ns.o 00:03:55.575 CC lib/nvme/nvme_pcie_common.o 00:03:55.575 CC lib/nvme/nvme_pcie.o 00:03:55.575 CC lib/nvme/nvme_qpair.o 00:03:55.575 CC lib/nvme/nvme.o 00:03:56.947 CC lib/nvme/nvme_quirks.o 00:03:56.947 CC lib/nvme/nvme_transport.o 00:03:56.947 CC lib/nvme/nvme_discovery.o 00:03:56.947 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:57.204 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:57.204 CC lib/nvme/nvme_tcp.o 00:03:57.204 CC lib/nvme/nvme_opal.o 00:03:57.462 LIB libspdk_thread.a 00:03:57.462 SO libspdk_thread.so.10.1 00:03:57.462 SYMLINK libspdk_thread.so 00:03:57.720 CC lib/nvme/nvme_io_msg.o 00:03:57.720 CC lib/nvme/nvme_poll_group.o 00:03:57.977 CC lib/nvme/nvme_zns.o 00:03:57.977 CC lib/nvme/nvme_stubs.o 00:03:58.234 
CC lib/nvme/nvme_auth.o 00:03:58.234 CC lib/nvme/nvme_cuse.o 00:03:58.492 CC lib/nvme/nvme_rdma.o 00:03:58.492 CC lib/accel/accel.o 00:03:58.821 CC lib/accel/accel_rpc.o 00:03:58.822 CC lib/blob/blobstore.o 00:03:58.822 CC lib/blob/request.o 00:03:58.822 CC lib/blob/zeroes.o 00:03:59.080 CC lib/accel/accel_sw.o 00:03:59.339 CC lib/init/json_config.o 00:03:59.598 CC lib/init/subsystem.o 00:03:59.598 CC lib/blob/blob_bs_dev.o 00:03:59.598 CC lib/init/subsystem_rpc.o 00:03:59.598 CC lib/init/rpc.o 00:03:59.856 CC lib/virtio/virtio_vfio_user.o 00:03:59.856 CC lib/virtio/virtio.o 00:03:59.856 CC lib/virtio/virtio_vhost_user.o 00:03:59.856 CC lib/virtio/virtio_pci.o 00:03:59.856 LIB libspdk_init.a 00:04:00.114 SO libspdk_init.so.5.0 00:04:00.114 SYMLINK libspdk_init.so 00:04:00.373 CC lib/event/app.o 00:04:00.373 CC lib/event/reactor.o 00:04:00.373 CC lib/event/log_rpc.o 00:04:00.373 CC lib/event/app_rpc.o 00:04:00.373 CC lib/event/scheduler_static.o 00:04:00.373 LIB libspdk_virtio.a 00:04:00.631 LIB libspdk_accel.a 00:04:00.631 SO libspdk_virtio.so.7.0 00:04:00.631 SO libspdk_accel.so.15.1 00:04:00.631 SYMLINK libspdk_virtio.so 00:04:00.631 SYMLINK libspdk_accel.so 00:04:00.889 CC lib/bdev/bdev.o 00:04:00.889 CC lib/bdev/bdev_rpc.o 00:04:00.889 CC lib/bdev/bdev_zone.o 00:04:00.889 CC lib/bdev/scsi_nvme.o 00:04:00.889 CC lib/bdev/part.o 00:04:01.147 LIB libspdk_nvme.a 00:04:01.147 LIB libspdk_event.a 00:04:01.147 SO libspdk_event.so.14.0 00:04:01.405 SYMLINK libspdk_event.so 00:04:01.405 SO libspdk_nvme.so.13.1 00:04:01.972 SYMLINK libspdk_nvme.so 00:04:03.875 LIB libspdk_blob.a 00:04:03.875 SO libspdk_blob.so.11.0 00:04:03.875 SYMLINK libspdk_blob.so 00:04:04.132 CC lib/lvol/lvol.o 00:04:04.132 CC lib/blobfs/blobfs.o 00:04:04.132 CC lib/blobfs/tree.o 00:04:05.503 LIB libspdk_bdev.a 00:04:05.504 SO libspdk_bdev.so.15.1 00:04:05.504 SYMLINK libspdk_bdev.so 00:04:05.504 LIB libspdk_blobfs.a 00:04:05.504 SO libspdk_blobfs.so.10.0 00:04:05.761 SYMLINK libspdk_blobfs.so 00:04:05.761 CC lib/ublk/ublk.o 00:04:05.761 CC lib/ublk/ublk_rpc.o 00:04:05.761 CC lib/nbd/nbd.o 00:04:05.761 CC lib/nbd/nbd_rpc.o 00:04:05.761 CC lib/nvmf/ctrlr.o 00:04:05.761 CC lib/nvmf/ctrlr_discovery.o 00:04:05.761 CC lib/nvmf/ctrlr_bdev.o 00:04:05.761 CC lib/scsi/dev.o 00:04:05.761 CC lib/ftl/ftl_core.o 00:04:06.018 CC lib/ftl/ftl_init.o 00:04:06.018 LIB libspdk_lvol.a 00:04:06.018 SO libspdk_lvol.so.10.0 00:04:06.018 CC lib/nvmf/subsystem.o 00:04:06.276 SYMLINK libspdk_lvol.so 00:04:06.276 CC lib/ftl/ftl_layout.o 00:04:06.276 CC lib/scsi/lun.o 00:04:06.535 CC lib/scsi/port.o 00:04:06.535 CC lib/scsi/scsi.o 00:04:06.535 LIB libspdk_nbd.a 00:04:06.792 SO libspdk_nbd.so.7.0 00:04:06.792 CC lib/scsi/scsi_bdev.o 00:04:06.792 CC lib/ftl/ftl_debug.o 00:04:06.792 CC lib/scsi/scsi_pr.o 00:04:06.792 SYMLINK libspdk_nbd.so 00:04:06.792 CC lib/scsi/scsi_rpc.o 00:04:06.792 CC lib/nvmf/nvmf.o 00:04:06.792 CC lib/nvmf/nvmf_rpc.o 00:04:07.048 CC lib/scsi/task.o 00:04:07.048 LIB libspdk_ublk.a 00:04:07.304 CC lib/ftl/ftl_io.o 00:04:07.304 SO libspdk_ublk.so.3.0 00:04:07.304 SYMLINK libspdk_ublk.so 00:04:07.304 CC lib/ftl/ftl_sb.o 00:04:07.304 CC lib/ftl/ftl_l2p.o 00:04:07.562 CC lib/ftl/ftl_l2p_flat.o 00:04:07.562 CC lib/ftl/ftl_nv_cache.o 00:04:07.820 CC lib/ftl/ftl_band.o 00:04:07.820 CC lib/ftl/ftl_band_ops.o 00:04:07.820 CC lib/ftl/ftl_writer.o 00:04:07.820 CC lib/ftl/ftl_rq.o 00:04:07.820 LIB libspdk_scsi.a 00:04:08.078 SO libspdk_scsi.so.9.0 00:04:08.078 CC lib/ftl/ftl_reloc.o 00:04:08.335 SYMLINK libspdk_scsi.so 00:04:08.335 CC 
lib/ftl/ftl_l2p_cache.o 00:04:08.335 CC lib/nvmf/transport.o 00:04:08.335 CC lib/nvmf/tcp.o 00:04:08.593 CC lib/iscsi/conn.o 00:04:08.593 CC lib/vhost/vhost.o 00:04:08.852 CC lib/ftl/ftl_p2l.o 00:04:08.852 CC lib/nvmf/stubs.o 00:04:08.852 CC lib/ftl/mngt/ftl_mngt.o 00:04:09.419 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:09.419 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:09.677 CC lib/vhost/vhost_rpc.o 00:04:09.677 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:09.677 CC lib/vhost/vhost_scsi.o 00:04:09.677 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:09.677 CC lib/vhost/vhost_blk.o 00:04:09.935 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:09.935 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:09.935 CC lib/iscsi/init_grp.o 00:04:09.935 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:09.935 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:10.193 CC lib/nvmf/mdns_server.o 00:04:10.450 CC lib/nvmf/rdma.o 00:04:10.450 CC lib/nvmf/auth.o 00:04:10.450 CC lib/iscsi/iscsi.o 00:04:10.450 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:10.708 CC lib/vhost/rte_vhost_user.o 00:04:10.708 CC lib/iscsi/md5.o 00:04:10.708 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:10.965 CC lib/iscsi/param.o 00:04:10.965 CC lib/iscsi/portal_grp.o 00:04:10.965 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:11.223 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:11.481 CC lib/iscsi/tgt_node.o 00:04:11.481 CC lib/ftl/utils/ftl_conf.o 00:04:11.481 CC lib/ftl/utils/ftl_md.o 00:04:11.481 CC lib/iscsi/iscsi_subsystem.o 00:04:11.739 CC lib/iscsi/iscsi_rpc.o 00:04:11.739 CC lib/iscsi/task.o 00:04:11.739 CC lib/ftl/utils/ftl_mempool.o 00:04:11.998 CC lib/ftl/utils/ftl_bitmap.o 00:04:11.998 CC lib/ftl/utils/ftl_property.o 00:04:11.998 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:11.998 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:12.256 LIB libspdk_vhost.a 00:04:12.256 SO libspdk_vhost.so.8.0 00:04:12.256 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:12.256 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:12.256 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:12.256 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:12.256 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:12.514 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:12.514 SYMLINK libspdk_vhost.so 00:04:12.514 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:12.514 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:12.514 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:12.514 CC lib/ftl/base/ftl_base_dev.o 00:04:12.514 CC lib/ftl/base/ftl_base_bdev.o 00:04:12.772 CC lib/ftl/ftl_trace.o 00:04:13.030 LIB libspdk_ftl.a 00:04:13.030 LIB libspdk_iscsi.a 00:04:13.030 SO libspdk_iscsi.so.8.0 00:04:13.288 SO libspdk_ftl.so.9.0 00:04:13.288 SYMLINK libspdk_iscsi.so 00:04:13.855 SYMLINK libspdk_ftl.so 00:04:13.855 LIB libspdk_nvmf.a 00:04:13.855 SO libspdk_nvmf.so.18.1 00:04:14.113 SYMLINK libspdk_nvmf.so 00:04:14.679 CC module/env_dpdk/env_dpdk_rpc.o 00:04:14.679 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:14.679 CC module/keyring/file/keyring.o 00:04:14.679 CC module/keyring/linux/keyring.o 00:04:14.679 CC module/blob/bdev/blob_bdev.o 00:04:14.679 CC module/scheduler/gscheduler/gscheduler.o 00:04:14.679 CC module/sock/posix/posix.o 00:04:14.679 CC module/accel/ioat/accel_ioat.o 00:04:14.679 CC module/accel/error/accel_error.o 00:04:14.679 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:14.679 LIB libspdk_env_dpdk_rpc.a 00:04:14.937 SO libspdk_env_dpdk_rpc.so.6.0 00:04:14.937 SYMLINK libspdk_env_dpdk_rpc.so 00:04:14.937 CC module/accel/error/accel_error_rpc.o 00:04:14.937 CC module/keyring/file/keyring_rpc.o 00:04:14.937 LIB libspdk_scheduler_gscheduler.a 00:04:14.937 LIB libspdk_scheduler_dpdk_governor.a 
00:04:14.937 CC module/keyring/linux/keyring_rpc.o 00:04:14.937 SO libspdk_scheduler_gscheduler.so.4.0 00:04:14.937 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:15.196 LIB libspdk_scheduler_dynamic.a 00:04:15.196 CC module/accel/ioat/accel_ioat_rpc.o 00:04:15.196 SYMLINK libspdk_scheduler_gscheduler.so 00:04:15.196 SO libspdk_scheduler_dynamic.so.4.0 00:04:15.196 LIB libspdk_accel_error.a 00:04:15.196 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:15.196 SO libspdk_accel_error.so.2.0 00:04:15.196 LIB libspdk_keyring_file.a 00:04:15.196 SYMLINK libspdk_scheduler_dynamic.so 00:04:15.196 LIB libspdk_keyring_linux.a 00:04:15.196 LIB libspdk_blob_bdev.a 00:04:15.196 SO libspdk_keyring_file.so.1.0 00:04:15.196 SYMLINK libspdk_accel_error.so 00:04:15.196 SO libspdk_blob_bdev.so.11.0 00:04:15.196 SO libspdk_keyring_linux.so.1.0 00:04:15.196 LIB libspdk_accel_ioat.a 00:04:15.454 CC module/accel/dsa/accel_dsa.o 00:04:15.454 CC module/accel/dsa/accel_dsa_rpc.o 00:04:15.454 SO libspdk_accel_ioat.so.6.0 00:04:15.454 SYMLINK libspdk_keyring_file.so 00:04:15.454 SYMLINK libspdk_blob_bdev.so 00:04:15.454 SYMLINK libspdk_keyring_linux.so 00:04:15.454 CC module/accel/iaa/accel_iaa.o 00:04:15.455 CC module/accel/iaa/accel_iaa_rpc.o 00:04:15.455 SYMLINK libspdk_accel_ioat.so 00:04:15.712 CC module/bdev/delay/vbdev_delay.o 00:04:15.712 LIB libspdk_accel_iaa.a 00:04:15.970 CC module/bdev/lvol/vbdev_lvol.o 00:04:15.970 CC module/bdev/gpt/gpt.o 00:04:15.970 CC module/blobfs/bdev/blobfs_bdev.o 00:04:15.970 CC module/bdev/error/vbdev_error.o 00:04:15.970 CC module/bdev/malloc/bdev_malloc.o 00:04:15.970 LIB libspdk_accel_dsa.a 00:04:15.970 SO libspdk_accel_iaa.so.3.0 00:04:15.970 CC module/bdev/null/bdev_null.o 00:04:15.970 SO libspdk_accel_dsa.so.5.0 00:04:15.970 SYMLINK libspdk_accel_iaa.so 00:04:15.970 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:15.970 SYMLINK libspdk_accel_dsa.so 00:04:15.970 CC module/bdev/error/vbdev_error_rpc.o 00:04:16.228 CC module/bdev/gpt/vbdev_gpt.o 00:04:16.228 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:16.228 LIB libspdk_sock_posix.a 00:04:16.486 LIB libspdk_blobfs_bdev.a 00:04:16.486 SO libspdk_sock_posix.so.6.0 00:04:16.486 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:16.486 SO libspdk_blobfs_bdev.so.6.0 00:04:16.486 LIB libspdk_bdev_error.a 00:04:16.486 CC module/bdev/null/bdev_null_rpc.o 00:04:16.486 SO libspdk_bdev_error.so.6.0 00:04:16.486 SYMLINK libspdk_blobfs_bdev.so 00:04:16.486 SYMLINK libspdk_sock_posix.so 00:04:16.486 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:16.486 LIB libspdk_bdev_malloc.a 00:04:16.486 SYMLINK libspdk_bdev_error.so 00:04:16.486 SO libspdk_bdev_malloc.so.6.0 00:04:16.747 LIB libspdk_bdev_gpt.a 00:04:16.747 SO libspdk_bdev_gpt.so.6.0 00:04:16.747 SYMLINK libspdk_bdev_malloc.so 00:04:16.747 LIB libspdk_bdev_delay.a 00:04:16.747 CC module/bdev/passthru/vbdev_passthru.o 00:04:16.747 CC module/bdev/raid/bdev_raid.o 00:04:16.747 CC module/bdev/nvme/bdev_nvme.o 00:04:16.747 SO libspdk_bdev_delay.so.6.0 00:04:16.747 SYMLINK libspdk_bdev_gpt.so 00:04:16.747 CC module/bdev/split/vbdev_split.o 00:04:16.747 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:16.747 LIB libspdk_bdev_null.a 00:04:16.747 SO libspdk_bdev_null.so.6.0 00:04:17.005 SYMLINK libspdk_bdev_delay.so 00:04:17.005 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:17.005 SYMLINK libspdk_bdev_null.so 00:04:17.005 CC module/bdev/nvme/nvme_rpc.o 00:04:17.005 CC module/bdev/xnvme/bdev_xnvme.o 00:04:17.005 CC module/bdev/aio/bdev_aio.o 00:04:17.005 CC module/bdev/split/vbdev_split_rpc.o 00:04:17.005 
CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:17.263 LIB libspdk_bdev_lvol.a 00:04:17.263 SO libspdk_bdev_lvol.so.6.0 00:04:17.263 LIB libspdk_bdev_split.a 00:04:17.263 LIB libspdk_bdev_passthru.a 00:04:17.263 SYMLINK libspdk_bdev_lvol.so 00:04:17.263 CC module/bdev/aio/bdev_aio_rpc.o 00:04:17.263 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:17.263 SO libspdk_bdev_split.so.6.0 00:04:17.263 SO libspdk_bdev_passthru.so.6.0 00:04:17.263 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:17.522 CC module/bdev/nvme/bdev_mdns_client.o 00:04:17.522 SYMLINK libspdk_bdev_split.so 00:04:17.522 CC module/bdev/nvme/vbdev_opal.o 00:04:17.522 SYMLINK libspdk_bdev_passthru.so 00:04:17.522 CC module/bdev/raid/bdev_raid_rpc.o 00:04:17.522 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:17.522 LIB libspdk_bdev_aio.a 00:04:17.522 LIB libspdk_bdev_zone_block.a 00:04:17.522 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:17.522 SO libspdk_bdev_zone_block.so.6.0 00:04:17.522 CC module/bdev/raid/bdev_raid_sb.o 00:04:17.522 SO libspdk_bdev_aio.so.6.0 00:04:17.522 LIB libspdk_bdev_xnvme.a 00:04:17.781 SO libspdk_bdev_xnvme.so.3.0 00:04:17.781 SYMLINK libspdk_bdev_zone_block.so 00:04:17.781 CC module/bdev/raid/raid0.o 00:04:17.781 SYMLINK libspdk_bdev_aio.so 00:04:17.781 CC module/bdev/raid/raid1.o 00:04:17.781 CC module/bdev/raid/concat.o 00:04:17.781 SYMLINK libspdk_bdev_xnvme.so 00:04:17.781 CC module/bdev/ftl/bdev_ftl.o 00:04:17.781 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:18.039 CC module/bdev/iscsi/bdev_iscsi.o 00:04:18.039 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:18.039 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:18.039 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:18.039 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:18.297 LIB libspdk_bdev_raid.a 00:04:18.297 LIB libspdk_bdev_ftl.a 00:04:18.297 SO libspdk_bdev_ftl.so.6.0 00:04:18.297 SO libspdk_bdev_raid.so.6.0 00:04:18.297 LIB libspdk_bdev_iscsi.a 00:04:18.297 SYMLINK libspdk_bdev_ftl.so 00:04:18.297 SO libspdk_bdev_iscsi.so.6.0 00:04:18.604 SYMLINK libspdk_bdev_raid.so 00:04:18.604 SYMLINK libspdk_bdev_iscsi.so 00:04:18.604 LIB libspdk_bdev_virtio.a 00:04:18.604 SO libspdk_bdev_virtio.so.6.0 00:04:18.863 SYMLINK libspdk_bdev_virtio.so 00:04:20.238 LIB libspdk_bdev_nvme.a 00:04:20.238 SO libspdk_bdev_nvme.so.7.0 00:04:20.238 SYMLINK libspdk_bdev_nvme.so 00:04:20.803 CC module/event/subsystems/iobuf/iobuf.o 00:04:20.803 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:20.803 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:20.803 CC module/event/subsystems/keyring/keyring.o 00:04:20.803 CC module/event/subsystems/vmd/vmd.o 00:04:20.803 CC module/event/subsystems/scheduler/scheduler.o 00:04:20.803 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:20.803 CC module/event/subsystems/sock/sock.o 00:04:20.803 LIB libspdk_event_vhost_blk.a 00:04:20.803 LIB libspdk_event_scheduler.a 00:04:20.803 LIB libspdk_event_keyring.a 00:04:20.803 LIB libspdk_event_iobuf.a 00:04:20.803 SO libspdk_event_vhost_blk.so.3.0 00:04:20.803 SO libspdk_event_scheduler.so.4.0 00:04:20.803 SO libspdk_event_keyring.so.1.0 00:04:20.803 LIB libspdk_event_sock.a 00:04:20.803 SO libspdk_event_iobuf.so.3.0 00:04:20.803 LIB libspdk_event_vmd.a 00:04:20.803 SO libspdk_event_sock.so.5.0 00:04:21.062 SYMLINK libspdk_event_vhost_blk.so 00:04:21.062 SO libspdk_event_vmd.so.6.0 00:04:21.062 SYMLINK libspdk_event_scheduler.so 00:04:21.062 SYMLINK libspdk_event_keyring.so 00:04:21.062 SYMLINK libspdk_event_sock.so 00:04:21.062 SYMLINK libspdk_event_iobuf.so 00:04:21.062 SYMLINK 
libspdk_event_vmd.so 00:04:21.321 CC module/event/subsystems/accel/accel.o 00:04:21.321 LIB libspdk_event_accel.a 00:04:21.579 SO libspdk_event_accel.so.6.0 00:04:21.579 SYMLINK libspdk_event_accel.so 00:04:21.837 CC module/event/subsystems/bdev/bdev.o 00:04:22.095 LIB libspdk_event_bdev.a 00:04:22.095 SO libspdk_event_bdev.so.6.0 00:04:22.095 SYMLINK libspdk_event_bdev.so 00:04:22.353 CC module/event/subsystems/scsi/scsi.o 00:04:22.353 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:22.353 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:22.353 CC module/event/subsystems/ublk/ublk.o 00:04:22.353 CC module/event/subsystems/nbd/nbd.o 00:04:22.609 LIB libspdk_event_ublk.a 00:04:22.609 LIB libspdk_event_nbd.a 00:04:22.609 SO libspdk_event_ublk.so.3.0 00:04:22.609 LIB libspdk_event_scsi.a 00:04:22.609 SO libspdk_event_nbd.so.6.0 00:04:22.609 SO libspdk_event_scsi.so.6.0 00:04:22.609 SYMLINK libspdk_event_ublk.so 00:04:22.609 SYMLINK libspdk_event_scsi.so 00:04:22.609 SYMLINK libspdk_event_nbd.so 00:04:22.609 LIB libspdk_event_nvmf.a 00:04:22.609 SO libspdk_event_nvmf.so.6.0 00:04:22.867 SYMLINK libspdk_event_nvmf.so 00:04:22.867 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:22.867 CC module/event/subsystems/iscsi/iscsi.o 00:04:23.125 LIB libspdk_event_vhost_scsi.a 00:04:23.125 SO libspdk_event_vhost_scsi.so.3.0 00:04:23.125 LIB libspdk_event_iscsi.a 00:04:23.125 SYMLINK libspdk_event_vhost_scsi.so 00:04:23.125 SO libspdk_event_iscsi.so.6.0 00:04:23.384 SYMLINK libspdk_event_iscsi.so 00:04:23.384 SO libspdk.so.6.0 00:04:23.384 SYMLINK libspdk.so 00:04:23.643 TEST_HEADER include/spdk/accel.h 00:04:23.643 CC test/rpc_client/rpc_client_test.o 00:04:23.643 CC app/trace_record/trace_record.o 00:04:23.643 TEST_HEADER include/spdk/accel_module.h 00:04:23.643 TEST_HEADER include/spdk/assert.h 00:04:23.643 CXX app/trace/trace.o 00:04:23.643 TEST_HEADER include/spdk/barrier.h 00:04:23.643 TEST_HEADER include/spdk/base64.h 00:04:23.643 TEST_HEADER include/spdk/bdev.h 00:04:23.643 TEST_HEADER include/spdk/bdev_module.h 00:04:23.643 TEST_HEADER include/spdk/bdev_zone.h 00:04:23.643 TEST_HEADER include/spdk/bit_array.h 00:04:23.643 TEST_HEADER include/spdk/bit_pool.h 00:04:23.643 TEST_HEADER include/spdk/blob_bdev.h 00:04:23.643 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:23.643 TEST_HEADER include/spdk/blobfs.h 00:04:23.643 TEST_HEADER include/spdk/blob.h 00:04:23.643 TEST_HEADER include/spdk/conf.h 00:04:23.643 TEST_HEADER include/spdk/config.h 00:04:23.643 TEST_HEADER include/spdk/cpuset.h 00:04:23.643 TEST_HEADER include/spdk/crc16.h 00:04:23.643 TEST_HEADER include/spdk/crc32.h 00:04:23.643 TEST_HEADER include/spdk/crc64.h 00:04:23.916 TEST_HEADER include/spdk/dif.h 00:04:23.916 TEST_HEADER include/spdk/dma.h 00:04:23.916 TEST_HEADER include/spdk/endian.h 00:04:23.916 TEST_HEADER include/spdk/env_dpdk.h 00:04:23.916 TEST_HEADER include/spdk/env.h 00:04:23.916 TEST_HEADER include/spdk/event.h 00:04:23.916 TEST_HEADER include/spdk/fd_group.h 00:04:23.916 TEST_HEADER include/spdk/fd.h 00:04:23.916 TEST_HEADER include/spdk/file.h 00:04:23.916 TEST_HEADER include/spdk/ftl.h 00:04:23.916 TEST_HEADER include/spdk/gpt_spec.h 00:04:23.916 TEST_HEADER include/spdk/hexlify.h 00:04:23.916 TEST_HEADER include/spdk/histogram_data.h 00:04:23.916 TEST_HEADER include/spdk/idxd.h 00:04:23.916 TEST_HEADER include/spdk/idxd_spec.h 00:04:23.916 TEST_HEADER include/spdk/init.h 00:04:23.916 TEST_HEADER include/spdk/ioat.h 00:04:23.916 TEST_HEADER include/spdk/ioat_spec.h 00:04:23.916 TEST_HEADER 
include/spdk/iscsi_spec.h 00:04:23.916 CC examples/util/zipf/zipf.o 00:04:23.916 TEST_HEADER include/spdk/json.h 00:04:23.916 TEST_HEADER include/spdk/jsonrpc.h 00:04:23.916 TEST_HEADER include/spdk/keyring.h 00:04:23.916 TEST_HEADER include/spdk/keyring_module.h 00:04:23.916 TEST_HEADER include/spdk/likely.h 00:04:23.916 TEST_HEADER include/spdk/log.h 00:04:23.916 CC examples/ioat/perf/perf.o 00:04:23.916 TEST_HEADER include/spdk/lvol.h 00:04:23.916 CC test/thread/poller_perf/poller_perf.o 00:04:23.916 TEST_HEADER include/spdk/memory.h 00:04:23.916 TEST_HEADER include/spdk/mmio.h 00:04:23.916 CC test/app/bdev_svc/bdev_svc.o 00:04:23.916 TEST_HEADER include/spdk/nbd.h 00:04:23.916 TEST_HEADER include/spdk/notify.h 00:04:23.916 CC test/dma/test_dma/test_dma.o 00:04:23.916 TEST_HEADER include/spdk/nvme.h 00:04:23.916 TEST_HEADER include/spdk/nvme_intel.h 00:04:23.916 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:23.916 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:23.916 TEST_HEADER include/spdk/nvme_spec.h 00:04:23.916 TEST_HEADER include/spdk/nvme_zns.h 00:04:23.916 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:23.916 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:23.916 TEST_HEADER include/spdk/nvmf.h 00:04:23.916 TEST_HEADER include/spdk/nvmf_spec.h 00:04:23.916 TEST_HEADER include/spdk/nvmf_transport.h 00:04:23.916 TEST_HEADER include/spdk/opal.h 00:04:23.916 TEST_HEADER include/spdk/opal_spec.h 00:04:23.916 TEST_HEADER include/spdk/pci_ids.h 00:04:23.916 TEST_HEADER include/spdk/pipe.h 00:04:23.916 TEST_HEADER include/spdk/queue.h 00:04:23.916 TEST_HEADER include/spdk/reduce.h 00:04:23.916 TEST_HEADER include/spdk/rpc.h 00:04:23.916 TEST_HEADER include/spdk/scheduler.h 00:04:23.916 TEST_HEADER include/spdk/scsi.h 00:04:23.916 TEST_HEADER include/spdk/scsi_spec.h 00:04:24.189 TEST_HEADER include/spdk/sock.h 00:04:24.189 TEST_HEADER include/spdk/stdinc.h 00:04:24.189 TEST_HEADER include/spdk/string.h 00:04:24.189 TEST_HEADER include/spdk/thread.h 00:04:24.189 TEST_HEADER include/spdk/trace.h 00:04:24.189 TEST_HEADER include/spdk/trace_parser.h 00:04:24.189 TEST_HEADER include/spdk/tree.h 00:04:24.189 TEST_HEADER include/spdk/ublk.h 00:04:24.189 CC test/env/mem_callbacks/mem_callbacks.o 00:04:24.189 TEST_HEADER include/spdk/util.h 00:04:24.189 LINK rpc_client_test 00:04:24.189 TEST_HEADER include/spdk/uuid.h 00:04:24.189 TEST_HEADER include/spdk/version.h 00:04:24.189 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:24.189 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:24.189 TEST_HEADER include/spdk/vhost.h 00:04:24.189 TEST_HEADER include/spdk/vmd.h 00:04:24.189 TEST_HEADER include/spdk/xor.h 00:04:24.189 TEST_HEADER include/spdk/zipf.h 00:04:24.189 CXX test/cpp_headers/accel.o 00:04:24.189 LINK zipf 00:04:24.189 LINK poller_perf 00:04:24.189 LINK ioat_perf 00:04:24.189 LINK spdk_trace_record 00:04:24.189 LINK bdev_svc 00:04:24.189 CXX test/cpp_headers/accel_module.o 00:04:24.447 LINK test_dma 00:04:24.447 LINK spdk_trace 00:04:24.447 CC examples/ioat/verify/verify.o 00:04:24.447 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:24.706 CC test/env/vtophys/vtophys.o 00:04:24.706 CXX test/cpp_headers/assert.o 00:04:24.706 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:24.706 CC app/nvmf_tgt/nvmf_main.o 00:04:24.964 LINK interrupt_tgt 00:04:24.964 LINK vtophys 00:04:24.964 CXX test/cpp_headers/barrier.o 00:04:24.964 LINK verify 00:04:24.964 CC examples/thread/thread/thread_ex.o 00:04:24.964 CC examples/sock/hello_world/hello_sock.o 00:04:24.964 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 
00:04:25.221 LINK nvmf_tgt 00:04:25.221 LINK mem_callbacks 00:04:25.221 CXX test/cpp_headers/base64.o 00:04:25.221 CXX test/cpp_headers/bdev.o 00:04:25.221 LINK env_dpdk_post_init 00:04:25.221 CC test/app/histogram_perf/histogram_perf.o 00:04:25.479 CC test/app/jsoncat/jsoncat.o 00:04:25.479 LINK thread 00:04:25.479 LINK nvme_fuzz 00:04:25.479 LINK hello_sock 00:04:25.737 LINK histogram_perf 00:04:25.737 LINK jsoncat 00:04:25.737 CXX test/cpp_headers/bdev_module.o 00:04:25.737 CC app/iscsi_tgt/iscsi_tgt.o 00:04:25.737 CC test/env/memory/memory_ut.o 00:04:25.737 CC app/spdk_tgt/spdk_tgt.o 00:04:25.737 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:25.737 CC examples/vmd/lsvmd/lsvmd.o 00:04:25.995 CC test/env/pci/pci_ut.o 00:04:25.995 CC app/spdk_lspci/spdk_lspci.o 00:04:25.995 CXX test/cpp_headers/bdev_zone.o 00:04:25.995 CC examples/vmd/led/led.o 00:04:25.995 LINK lsvmd 00:04:25.995 LINK iscsi_tgt 00:04:26.253 LINK spdk_lspci 00:04:26.253 CC test/event/event_perf/event_perf.o 00:04:26.253 LINK spdk_tgt 00:04:26.253 LINK led 00:04:26.513 CXX test/cpp_headers/bit_array.o 00:04:26.513 LINK event_perf 00:04:26.513 CC test/event/reactor/reactor.o 00:04:26.771 CC test/event/reactor_perf/reactor_perf.o 00:04:26.771 CXX test/cpp_headers/bit_pool.o 00:04:26.771 CC test/event/app_repeat/app_repeat.o 00:04:26.771 LINK pci_ut 00:04:26.771 LINK reactor 00:04:26.771 CC app/spdk_nvme_perf/perf.o 00:04:27.029 LINK reactor_perf 00:04:27.029 CC examples/idxd/perf/perf.o 00:04:27.029 CXX test/cpp_headers/blob_bdev.o 00:04:27.029 LINK app_repeat 00:04:27.029 CC test/event/scheduler/scheduler.o 00:04:27.288 CXX test/cpp_headers/blobfs_bdev.o 00:04:27.288 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:27.288 CXX test/cpp_headers/blobfs.o 00:04:27.546 CC test/app/stub/stub.o 00:04:27.546 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:27.546 LINK scheduler 00:04:27.546 CC test/nvme/aer/aer.o 00:04:27.804 LINK idxd_perf 00:04:27.804 CXX test/cpp_headers/blob.o 00:04:27.804 LINK stub 00:04:27.804 CC test/nvme/reset/reset.o 00:04:28.062 CXX test/cpp_headers/conf.o 00:04:28.062 LINK memory_ut 00:04:28.062 LINK aer 00:04:28.062 CC app/spdk_nvme_identify/identify.o 00:04:28.321 CC app/spdk_nvme_discover/discovery_aer.o 00:04:28.321 CC examples/accel/perf/accel_perf.o 00:04:28.321 LINK reset 00:04:28.321 LINK vhost_fuzz 00:04:28.321 CXX test/cpp_headers/config.o 00:04:28.602 CXX test/cpp_headers/cpuset.o 00:04:28.602 LINK spdk_nvme_discover 00:04:28.864 CXX test/cpp_headers/crc16.o 00:04:28.864 LINK spdk_nvme_perf 00:04:28.864 CC test/nvme/sgl/sgl.o 00:04:28.864 CC test/accel/dif/dif.o 00:04:28.864 CC test/blobfs/mkfs/mkfs.o 00:04:28.864 CC test/nvme/e2edp/nvme_dp.o 00:04:29.123 CXX test/cpp_headers/crc32.o 00:04:29.123 CC test/nvme/overhead/overhead.o 00:04:29.123 CC test/nvme/err_injection/err_injection.o 00:04:29.382 CXX test/cpp_headers/crc64.o 00:04:29.382 LINK mkfs 00:04:29.382 LINK sgl 00:04:29.641 LINK nvme_dp 00:04:29.641 LINK accel_perf 00:04:29.641 CXX test/cpp_headers/dif.o 00:04:29.641 CXX test/cpp_headers/dma.o 00:04:29.641 LINK err_injection 00:04:29.641 LINK overhead 00:04:29.641 LINK iscsi_fuzz 00:04:29.899 LINK dif 00:04:29.899 CXX test/cpp_headers/endian.o 00:04:29.899 CC test/nvme/startup/startup.o 00:04:29.899 CC test/nvme/reserve/reserve.o 00:04:30.157 CC test/nvme/simple_copy/simple_copy.o 00:04:30.157 CXX test/cpp_headers/env_dpdk.o 00:04:30.157 CC test/lvol/esnap/esnap.o 00:04:30.157 CC examples/blob/hello_world/hello_blob.o 00:04:30.157 LINK startup 00:04:30.157 LINK spdk_nvme_identify 
00:04:30.157 CC examples/blob/cli/blobcli.o 00:04:30.415 CC app/spdk_top/spdk_top.o 00:04:30.415 CC examples/nvme/hello_world/hello_world.o 00:04:30.415 LINK reserve 00:04:30.415 CXX test/cpp_headers/env.o 00:04:30.415 LINK simple_copy 00:04:30.415 CXX test/cpp_headers/event.o 00:04:30.673 CXX test/cpp_headers/fd_group.o 00:04:30.673 LINK hello_blob 00:04:30.673 LINK hello_world 00:04:30.673 CXX test/cpp_headers/fd.o 00:04:30.931 CXX test/cpp_headers/file.o 00:04:30.931 CC examples/nvme/reconnect/reconnect.o 00:04:30.931 CC test/bdev/bdevio/bdevio.o 00:04:30.931 CXX test/cpp_headers/ftl.o 00:04:30.931 CC test/nvme/connect_stress/connect_stress.o 00:04:31.189 CXX test/cpp_headers/gpt_spec.o 00:04:31.446 CC app/vhost/vhost.o 00:04:31.446 LINK connect_stress 00:04:31.446 LINK blobcli 00:04:31.446 CC app/spdk_dd/spdk_dd.o 00:04:31.446 CC examples/bdev/hello_world/hello_bdev.o 00:04:31.446 LINK reconnect 00:04:31.446 CXX test/cpp_headers/hexlify.o 00:04:31.705 LINK vhost 00:04:31.705 LINK spdk_top 00:04:31.705 CC test/nvme/boot_partition/boot_partition.o 00:04:31.705 LINK bdevio 00:04:31.705 CXX test/cpp_headers/histogram_data.o 00:04:31.964 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:31.964 CC examples/nvme/arbitration/arbitration.o 00:04:31.964 LINK hello_bdev 00:04:31.964 LINK spdk_dd 00:04:31.964 CC examples/nvme/hotplug/hotplug.o 00:04:32.222 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:32.222 CXX test/cpp_headers/idxd.o 00:04:32.222 LINK boot_partition 00:04:32.222 CC examples/nvme/abort/abort.o 00:04:32.612 LINK cmb_copy 00:04:32.612 CXX test/cpp_headers/idxd_spec.o 00:04:32.612 CC app/fio/nvme/fio_plugin.o 00:04:32.612 LINK hotplug 00:04:32.612 LINK arbitration 00:04:32.612 CC examples/bdev/bdevperf/bdevperf.o 00:04:32.612 CC test/nvme/compliance/nvme_compliance.o 00:04:32.893 CXX test/cpp_headers/init.o 00:04:32.893 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:32.893 CC app/fio/bdev/fio_plugin.o 00:04:32.893 LINK nvme_manage 00:04:33.151 CXX test/cpp_headers/ioat.o 00:04:33.151 CC test/nvme/fused_ordering/fused_ordering.o 00:04:33.151 LINK abort 00:04:33.151 LINK pmr_persistence 00:04:33.151 LINK nvme_compliance 00:04:33.409 CXX test/cpp_headers/ioat_spec.o 00:04:33.409 CXX test/cpp_headers/iscsi_spec.o 00:04:33.409 CXX test/cpp_headers/json.o 00:04:33.409 LINK fused_ordering 00:04:33.409 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:33.667 CXX test/cpp_headers/jsonrpc.o 00:04:33.667 CXX test/cpp_headers/keyring.o 00:04:33.667 CXX test/cpp_headers/keyring_module.o 00:04:33.667 LINK spdk_nvme 00:04:33.925 CXX test/cpp_headers/likely.o 00:04:33.925 LINK spdk_bdev 00:04:33.925 CXX test/cpp_headers/log.o 00:04:33.925 LINK doorbell_aers 00:04:33.925 CC test/nvme/fdp/fdp.o 00:04:33.925 CXX test/cpp_headers/lvol.o 00:04:33.925 CXX test/cpp_headers/memory.o 00:04:34.184 CXX test/cpp_headers/mmio.o 00:04:34.184 CC test/nvme/cuse/cuse.o 00:04:34.184 CXX test/cpp_headers/nbd.o 00:04:34.184 CXX test/cpp_headers/notify.o 00:04:34.184 CXX test/cpp_headers/nvme.o 00:04:34.184 CXX test/cpp_headers/nvme_intel.o 00:04:34.184 CXX test/cpp_headers/nvme_ocssd.o 00:04:34.184 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:34.184 CXX test/cpp_headers/nvme_spec.o 00:04:34.442 LINK bdevperf 00:04:34.442 CXX test/cpp_headers/nvme_zns.o 00:04:34.442 CXX test/cpp_headers/nvmf_cmd.o 00:04:34.442 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:34.442 CXX test/cpp_headers/nvmf.o 00:04:34.442 CXX test/cpp_headers/nvmf_spec.o 00:04:34.442 CXX test/cpp_headers/nvmf_transport.o 00:04:34.442 LINK fdp 
00:04:34.700 CXX test/cpp_headers/opal.o 00:04:34.700 CXX test/cpp_headers/opal_spec.o 00:04:34.700 CXX test/cpp_headers/pci_ids.o 00:04:34.700 CXX test/cpp_headers/pipe.o 00:04:34.700 CXX test/cpp_headers/queue.o 00:04:34.700 CXX test/cpp_headers/reduce.o 00:04:34.700 CXX test/cpp_headers/rpc.o 00:04:34.958 CXX test/cpp_headers/scheduler.o 00:04:34.958 CXX test/cpp_headers/scsi.o 00:04:34.958 CXX test/cpp_headers/scsi_spec.o 00:04:34.958 CXX test/cpp_headers/sock.o 00:04:34.958 CC examples/nvmf/nvmf/nvmf.o 00:04:34.958 CXX test/cpp_headers/stdinc.o 00:04:34.958 CXX test/cpp_headers/string.o 00:04:34.958 CXX test/cpp_headers/thread.o 00:04:35.215 CXX test/cpp_headers/trace.o 00:04:35.215 CXX test/cpp_headers/trace_parser.o 00:04:35.215 CXX test/cpp_headers/tree.o 00:04:35.215 CXX test/cpp_headers/ublk.o 00:04:35.215 CXX test/cpp_headers/util.o 00:04:35.215 CXX test/cpp_headers/uuid.o 00:04:35.215 CXX test/cpp_headers/version.o 00:04:35.215 CXX test/cpp_headers/vfio_user_pci.o 00:04:35.215 CXX test/cpp_headers/vfio_user_spec.o 00:04:35.215 CXX test/cpp_headers/vhost.o 00:04:35.215 LINK nvmf 00:04:35.471 CXX test/cpp_headers/vmd.o 00:04:35.471 CXX test/cpp_headers/xor.o 00:04:35.471 CXX test/cpp_headers/zipf.o 00:04:36.036 LINK cuse 00:04:39.319 LINK esnap 00:04:39.577 00:04:39.577 real 1m48.021s 00:04:39.577 user 11m31.073s 00:04:39.577 sys 2m7.381s 00:04:39.577 09:10:25 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:39.577 09:10:25 make -- common/autotest_common.sh@10 -- $ set +x 00:04:39.577 ************************************ 00:04:39.577 END TEST make 00:04:39.577 ************************************ 00:04:39.577 09:10:25 -- common/autotest_common.sh@1142 -- $ return 0 00:04:39.577 09:10:25 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:39.577 09:10:25 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:39.577 09:10:25 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:39.578 09:10:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.578 09:10:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:39.578 09:10:25 -- pm/common@44 -- $ pid=5239 00:04:39.578 09:10:25 -- pm/common@50 -- $ kill -TERM 5239 00:04:39.578 09:10:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.578 09:10:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:39.578 09:10:25 -- pm/common@44 -- $ pid=5241 00:04:39.578 09:10:25 -- pm/common@50 -- $ kill -TERM 5241 00:04:39.578 09:10:25 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:39.578 09:10:25 -- nvmf/common.sh@7 -- # uname -s 00:04:39.578 09:10:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:39.578 09:10:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:39.578 09:10:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:39.578 09:10:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:39.578 09:10:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:39.578 09:10:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:39.578 09:10:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:39.578 09:10:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:39.578 09:10:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:39.578 09:10:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:39.578 09:10:25 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1a76d6b3-a7b4-4a82-9516-c6e99e966a66 00:04:39.578 09:10:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=1a76d6b3-a7b4-4a82-9516-c6e99e966a66 00:04:39.578 09:10:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:39.578 09:10:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:39.578 09:10:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:39.578 09:10:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:39.578 09:10:25 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:39.578 09:10:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:39.578 09:10:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:39.578 09:10:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:39.578 09:10:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.578 09:10:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.578 09:10:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.578 09:10:25 -- paths/export.sh@5 -- # export PATH 00:04:39.578 09:10:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.578 09:10:25 -- nvmf/common.sh@47 -- # : 0 00:04:39.578 09:10:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:39.578 09:10:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:39.578 09:10:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:39.578 09:10:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:39.578 09:10:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:39.578 09:10:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:39.578 09:10:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:39.578 09:10:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:39.578 09:10:25 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:39.578 09:10:25 -- spdk/autotest.sh@32 -- # uname -s 00:04:39.578 09:10:25 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:39.578 09:10:25 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:39.578 09:10:25 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:39.578 09:10:25 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:39.578 09:10:25 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:39.578 09:10:25 -- spdk/autotest.sh@44 -- # modprobe nbd 
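The pm/common traces above show how the build job tears down its resource monitors before autotest starts its own collect-cpu-load and collect-vmstat monitors a few entries below: each monitor records its PID in a .pid file under the power/ output directory, and the shutdown helper sends SIGTERM to whatever PID it finds there. A minimal bash sketch of that shutdown step, reconstructed from the trace (the MONITOR_RESOURCES contents and the power directory path are assumptions taken from the pidfile paths shown above, not verbatim pm/common code):

    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
    power_dir=/home/vagrant/spdk_repo/spdk/../output/power

    signal_monitor_resources() {
        # Send the given signal to every monitor whose pidfile still exists.
        local signal=$1 monitor pid
        for monitor in "${MONITOR_RESOURCES[@]}"; do
            [[ -e $power_dir/$monitor.pid ]] || continue
            pid=$(< "$power_dir/$monitor.pid")
            kill -"$signal" "$pid"
        done
    }

    signal_monitor_resources TERM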
00:04:39.836 09:10:25 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:39.836 09:10:25 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:39.836 09:10:25 -- spdk/autotest.sh@48 -- # udevadm_pid=54078 00:04:39.836 09:10:25 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:39.836 09:10:25 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:39.836 09:10:25 -- pm/common@17 -- # local monitor 00:04:39.836 09:10:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.836 09:10:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.836 09:10:25 -- pm/common@25 -- # sleep 1 00:04:39.836 09:10:25 -- pm/common@21 -- # date +%s 00:04:39.836 09:10:25 -- pm/common@21 -- # date +%s 00:04:39.836 09:10:25 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720775425 00:04:39.836 09:10:25 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720775425 00:04:39.836 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720775425_collect-vmstat.pm.log 00:04:39.836 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720775425_collect-cpu-load.pm.log 00:04:40.794 09:10:26 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:40.794 09:10:26 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:40.794 09:10:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:40.794 09:10:26 -- common/autotest_common.sh@10 -- # set +x 00:04:40.794 09:10:26 -- spdk/autotest.sh@59 -- # create_test_list 00:04:40.794 09:10:26 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:40.794 09:10:26 -- common/autotest_common.sh@10 -- # set +x 00:04:40.794 09:10:27 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:40.794 09:10:27 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:40.794 09:10:27 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:40.794 09:10:27 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:40.794 09:10:27 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:40.794 09:10:27 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:40.794 09:10:27 -- common/autotest_common.sh@1455 -- # uname 00:04:40.794 09:10:27 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:40.794 09:10:27 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:40.794 09:10:27 -- common/autotest_common.sh@1475 -- # uname 00:04:40.794 09:10:27 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:40.794 09:10:27 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:40.794 09:10:27 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:40.794 09:10:27 -- spdk/autotest.sh@72 -- # hash lcov 00:04:40.794 09:10:27 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:40.794 09:10:27 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:40.794 --rc lcov_branch_coverage=1 00:04:40.794 --rc lcov_function_coverage=1 00:04:40.794 --rc genhtml_branch_coverage=1 00:04:40.794 --rc genhtml_function_coverage=1 00:04:40.794 --rc genhtml_legend=1 00:04:40.794 --rc geninfo_all_blocks=1 00:04:40.794 ' 00:04:40.794 09:10:27 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:40.794 --rc lcov_branch_coverage=1 00:04:40.794 --rc 
lcov_function_coverage=1 00:04:40.794 --rc genhtml_branch_coverage=1 00:04:40.794 --rc genhtml_function_coverage=1 00:04:40.794 --rc genhtml_legend=1 00:04:40.794 --rc geninfo_all_blocks=1 00:04:40.794 ' 00:04:40.794 09:10:27 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:40.794 --rc lcov_branch_coverage=1 00:04:40.794 --rc lcov_function_coverage=1 00:04:40.794 --rc genhtml_branch_coverage=1 00:04:40.794 --rc genhtml_function_coverage=1 00:04:40.794 --rc genhtml_legend=1 00:04:40.794 --rc geninfo_all_blocks=1 00:04:40.794 --no-external' 00:04:40.794 09:10:27 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:40.794 --rc lcov_branch_coverage=1 00:04:40.794 --rc lcov_function_coverage=1 00:04:40.794 --rc genhtml_branch_coverage=1 00:04:40.794 --rc genhtml_function_coverage=1 00:04:40.794 --rc genhtml_legend=1 00:04:40.794 --rc geninfo_all_blocks=1 00:04:40.794 --no-external' 00:04:40.794 09:10:27 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:40.794 lcov: LCOV version 1.14 00:04:40.794 09:10:27 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:58.873 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:58.873 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:11.078 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:11.078 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:11.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:11.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any 
data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:11.079 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any 
data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:11.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:14.378 09:11:00 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:14.378 09:11:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:14.378 09:11:00 -- common/autotest_common.sh@10 -- # set +x 00:05:14.378 09:11:00 -- spdk/autotest.sh@91 -- # rm -f 00:05:14.378 09:11:00 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:14.378 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:14.946 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:14.946 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:14.946 0000:00:12.0 (1b36 0010): Already using the nvme driver 
00:05:14.946 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:14.946 09:11:01 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:14.946 09:11:01 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:14.946 09:11:01 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:14.946 09:11:01 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:14.946 09:11:01 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:14.946 09:11:01 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:14.946 09:11:01 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:14.946 09:11:01 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:14.946 09:11:01 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:14.946 09:11:01 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:14.946 09:11:01 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:14.946 09:11:01 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:14.946 09:11:01 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:14.946 09:11:01 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:14.946 09:11:01 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:14.946 09:11:01 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:05:14.946 09:11:01 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:05:14.946 09:11:01 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:14.946 09:11:01 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:14.946 09:11:01 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:14.946 09:11:01 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:05:14.946 09:11:01 -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:05:14.946 09:11:01 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:14.946 09:11:01 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:14.946 09:11:01 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:14.946 09:11:01 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:05:14.946 09:11:01 -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:05:14.946 09:11:01 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:14.946 09:11:01 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:14.946 09:11:01 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:14.946 09:11:01 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:05:14.946 09:11:01 -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:05:14.946 09:11:01 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:14.946 09:11:01 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:14.946 09:11:01 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:14.946 09:11:01 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:05:14.946 09:11:01 -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:05:14.946 09:11:01 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:14.946 09:11:01 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:14.946 09:11:01 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:14.946 09:11:01 -- spdk/autotest.sh@110 -- # for dev 
in /dev/nvme*n!(*p*) 00:05:14.946 09:11:01 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:14.946 09:11:01 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:14.946 09:11:01 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:14.946 09:11:01 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:14.946 No valid GPT data, bailing 00:05:14.946 09:11:01 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:15.204 09:11:01 -- scripts/common.sh@391 -- # pt= 00:05:15.204 09:11:01 -- scripts/common.sh@392 -- # return 1 00:05:15.204 09:11:01 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:15.204 1+0 records in 00:05:15.204 1+0 records out 00:05:15.204 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136694 s, 76.7 MB/s 00:05:15.205 09:11:01 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:15.205 09:11:01 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:15.205 09:11:01 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:15.205 09:11:01 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:15.205 09:11:01 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:15.205 No valid GPT data, bailing 00:05:15.205 09:11:01 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:15.205 09:11:01 -- scripts/common.sh@391 -- # pt= 00:05:15.205 09:11:01 -- scripts/common.sh@392 -- # return 1 00:05:15.205 09:11:01 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:15.205 1+0 records in 00:05:15.205 1+0 records out 00:05:15.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00430007 s, 244 MB/s 00:05:15.205 09:11:01 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:15.205 09:11:01 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:15.205 09:11:01 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:05:15.205 09:11:01 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:05:15.205 09:11:01 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:15.205 No valid GPT data, bailing 00:05:15.205 09:11:01 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:15.205 09:11:01 -- scripts/common.sh@391 -- # pt= 00:05:15.205 09:11:01 -- scripts/common.sh@392 -- # return 1 00:05:15.205 09:11:01 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:15.205 1+0 records in 00:05:15.205 1+0 records out 00:05:15.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0048273 s, 217 MB/s 00:05:15.205 09:11:01 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:15.205 09:11:01 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:15.205 09:11:01 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:05:15.205 09:11:01 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:05:15.205 09:11:01 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:15.205 No valid GPT data, bailing 00:05:15.205 09:11:01 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:15.205 09:11:01 -- scripts/common.sh@391 -- # pt= 00:05:15.205 09:11:01 -- scripts/common.sh@392 -- # return 1 00:05:15.205 09:11:01 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:15.463 1+0 records in 00:05:15.463 1+0 records out 00:05:15.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00444262 s, 236 MB/s 00:05:15.463 09:11:01 -- spdk/autotest.sh@110 -- # 
for dev in /dev/nvme*n!(*p*) 00:05:15.463 09:11:01 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:15.463 09:11:01 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:05:15.463 09:11:01 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:05:15.463 09:11:01 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:15.463 No valid GPT data, bailing 00:05:15.463 09:11:01 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:15.463 09:11:01 -- scripts/common.sh@391 -- # pt= 00:05:15.463 09:11:01 -- scripts/common.sh@392 -- # return 1 00:05:15.463 09:11:01 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:15.463 1+0 records in 00:05:15.463 1+0 records out 00:05:15.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0040852 s, 257 MB/s 00:05:15.463 09:11:01 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:15.463 09:11:01 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:15.463 09:11:01 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:05:15.463 09:11:01 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:05:15.463 09:11:01 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:15.463 No valid GPT data, bailing 00:05:15.463 09:11:01 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:15.463 09:11:01 -- scripts/common.sh@391 -- # pt= 00:05:15.463 09:11:01 -- scripts/common.sh@392 -- # return 1 00:05:15.463 09:11:01 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:15.463 1+0 records in 00:05:15.463 1+0 records out 00:05:15.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0046634 s, 225 MB/s 00:05:15.463 09:11:01 -- spdk/autotest.sh@118 -- # sync 00:05:15.463 09:11:01 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:15.463 09:11:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:15.463 09:11:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:17.368 09:11:03 -- spdk/autotest.sh@124 -- # uname -s 00:05:17.368 09:11:03 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:17.368 09:11:03 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:17.368 09:11:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.368 09:11:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.368 09:11:03 -- common/autotest_common.sh@10 -- # set +x 00:05:17.368 ************************************ 00:05:17.368 START TEST setup.sh 00:05:17.368 ************************************ 00:05:17.368 09:11:03 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:17.368 * Looking for test storage... 
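The pre_cleanup phase traced just above first filters out zoned namespaces (get_zoned_devs reads /sys/block/<dev>/queue/zoned) and then zeroes the first MiB of every NVMe namespace on which neither spdk-gpt.py nor blkid finds a partition table. Pulling those fragments together into one readable bash sketch (the exact return-code handling of block_in_use is not fully visible in the trace, so treat this as an approximation rather than the script's verbatim logic):

    shopt -s extglob    # needed for the /dev/nvme*n!(*p*) pattern used above

    is_block_zoned() {
        # A namespace is zoned when queue/zoned exists and is not "none".
        local device=$1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(< "/sys/block/$device/queue/zoned") != none ]]
    }

    for dev in /dev/nvme*n!(*p*); do
        is_block_zoned "${dev##*/}" && continue          # leave zoned namespaces alone
        pt=$(blkid -s PTTYPE -o value "$dev" || true)    # empty when no partition table
        [[ -z $pt ]] && dd if=/dev/zero of="$dev" bs=1M count=1
    done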
00:05:17.368 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:17.368 09:11:03 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:17.368 09:11:03 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:17.368 09:11:03 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:17.368 09:11:03 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.368 09:11:03 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.368 09:11:03 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:17.368 ************************************ 00:05:17.368 START TEST acl 00:05:17.368 ************************************ 00:05:17.368 09:11:03 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:17.626 * Looking for test storage... 00:05:17.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:17.626 09:11:03 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:05:17.626 09:11:03 
setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:17.626 09:11:03 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:17.626 09:11:03 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:17.626 09:11:03 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:17.626 09:11:03 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:17.626 09:11:03 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:17.626 09:11:03 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:17.626 09:11:03 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:17.626 09:11:03 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:19.001 09:11:04 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:19.001 09:11:04 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:19.001 09:11:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.001 09:11:04 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:19.001 09:11:04 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.001 09:11:04 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:19.259 09:11:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:19.259 09:11:05 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:19.259 09:11:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.885 Hugepages 00:05:19.885 node hugesize free / total 00:05:19.885 09:11:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:19.885 09:11:05 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:19.885 09:11:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.885 00:05:19.885 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:19.885 09:11:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:19.885 09:11:05 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:19.885 09:11:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.885 09:11:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:19.885 09:11:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:19.885 09:11:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:19.885 09:11:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:05:19.885 09:11:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:19.885 09:11:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:19.885 09:11:06 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:19.885 09:11:06 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:19.885 09:11:06 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:19.885 09:11:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.885 09:11:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:19.885 09:11:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:19.885 09:11:06 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:19.885 09:11:06 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:19.885 09:11:06 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:19.885 09:11:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.885 09:11:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:05:19.885 09:11:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:19.885 09:11:06 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:19.885 09:11:06 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:19.885 09:11:06 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:19.885 09:11:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:20.143 09:11:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:05:20.143 09:11:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:20.143 09:11:06 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:05:20.143 09:11:06 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:20.143 09:11:06 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:20.143 09:11:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:20.143 09:11:06 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:05:20.143 09:11:06 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:20.143 09:11:06 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.143 09:11:06 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.143 09:11:06 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:20.143 ************************************ 00:05:20.143 START TEST denied 00:05:20.143 ************************************ 00:05:20.143 09:11:06 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:05:20.143 09:11:06 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:20.143 09:11:06 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:20.143 09:11:06 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.143 09:11:06 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:20.143 09:11:06 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:21.516 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:21.517 09:11:07 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:21.517 09:11:07 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:21.517 09:11:07 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:21.517 09:11:07 setup.sh.acl.denied -- 
setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:21.517 09:11:07 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:21.517 09:11:07 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:21.517 09:11:07 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:21.517 09:11:07 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:21.517 09:11:07 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:21.517 09:11:07 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:28.080 00:05:28.080 real 0m7.185s 00:05:28.080 user 0m0.851s 00:05:28.080 sys 0m1.366s 00:05:28.080 09:11:13 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.080 09:11:13 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:28.080 ************************************ 00:05:28.080 END TEST denied 00:05:28.080 ************************************ 00:05:28.080 09:11:13 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:28.080 09:11:13 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:28.080 09:11:13 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.080 09:11:13 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.080 09:11:13 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:28.080 ************************************ 00:05:28.080 START TEST allowed 00:05:28.080 ************************************ 00:05:28.080 09:11:13 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:05:28.080 09:11:13 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:28.080 09:11:13 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:28.080 09:11:13 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:28.080 09:11:13 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.080 09:11:13 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:28.342 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:28.342 09:11:14 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:28.342 09:11:14 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:28.342 09:11:14 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:28.342 09:11:14 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:05:28.342 09:11:14 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:05:28.342 09:11:14 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:28.342 09:11:14 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:28.342 09:11:14 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:28.342 09:11:14 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:05:28.342 09:11:14 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:05:28.342 09:11:14 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:28.342 09:11:14 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:28.342 09:11:14 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in 
"$@" 00:05:28.342 09:11:14 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:13.0 ]] 00:05:28.342 09:11:14 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:05:28.342 09:11:14 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:28.342 09:11:14 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:28.342 09:11:14 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:28.342 09:11:14 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:28.342 09:11:14 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:29.722 00:05:29.722 real 0m2.188s 00:05:29.722 user 0m1.050s 00:05:29.722 sys 0m1.126s 00:05:29.722 09:11:15 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.722 ************************************ 00:05:29.722 END TEST allowed 00:05:29.722 ************************************ 00:05:29.722 09:11:15 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:29.722 09:11:15 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:29.722 00:05:29.722 real 0m12.039s 00:05:29.722 user 0m3.101s 00:05:29.722 sys 0m3.970s 00:05:29.722 ************************************ 00:05:29.722 END TEST acl 00:05:29.722 ************************************ 00:05:29.722 09:11:15 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.722 09:11:15 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:29.722 09:11:15 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:29.722 09:11:15 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:29.722 09:11:15 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.722 09:11:15 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.722 09:11:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:29.722 ************************************ 00:05:29.722 START TEST hugepages 00:05:29.722 ************************************ 00:05:29.722 09:11:15 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:29.722 * Looking for test storage... 
00:05:29.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:29.722 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:29.722 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:29.722 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:29.722 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:29.722 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:29.722 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:29.722 09:11:15 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:29.722 09:11:15 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:29.722 09:11:15 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:29.722 09:11:15 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:29.722 09:11:15 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.722 09:11:15 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:29.722 09:11:15 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:29.723 09:11:15 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.723 09:11:15 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.723 09:11:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:29.723 09:11:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:29.723 09:11:15 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 5795528 kB' 'MemAvailable: 7378672 kB' 'Buffers: 2436 kB' 'Cached: 1796500 kB' 'SwapCached: 0 kB' 'Active: 444408 kB' 'Inactive: 1456440 kB' 'Active(anon): 112424 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456440 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 103652 kB' 'Mapped: 48860 kB' 'Shmem: 10512 kB' 'KReclaimable: 63320 kB' 'Slab: 135948 kB' 'SReclaimable: 63320 kB' 'SUnreclaim: 72628 kB' 'KernelStack: 6372 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 326940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:29.723 09:11:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:29.723 09:11:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:29.723 09:11:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:29.723 09:11:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:29.723 09:11:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:29.723 09:11:15 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:29.723 09:11:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' [xtrace condensed: setup/common.sh@31-32 read and compare every remaining /proc/meminfo key, MemAvailable through HugePages_Surp, against Hugepagesize; each mismatch hits continue]
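What the /proc/meminfo walk traced above is doing, as a minimal sketch of the get_meminfo helper (simplified from test/setup/common.sh, which also snapshots the file with mapfile and strips per-node "Node N" prefixes):

  get_meminfo() {                          # e.g. get_meminfo Hugepagesize  -> 2048
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo
      [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < "$mem_f"
      return 1
  }

Hugepagesize comes back as 2048 (kB), so when the default_setup test below asks get_test_nr_hugepages for 2097152 kB it ends up with 2097152 / 2048 = 1024 pages, all assigned to node 0 (nr_hugepages=1024, nodes_test[0]=1024).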
00:05:29.724 09:11:15 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:29.724 09:11:15 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:29.724 09:11:15 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:29.724 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:29.724 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:29.724 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:29.724 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:29.724 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:29.724 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:29.724 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:29.724 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:29.724 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:29.724 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:29.724 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:29.724 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:29.724 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:29.724 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:29.724 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:29.724 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:29.724 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:29.724 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:29.724 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:29.724 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:29.724 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:29.724 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:29.724 09:11:15 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:29.724 09:11:15 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.724 09:11:15 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.724 09:11:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:29.724 ************************************ 00:05:29.724 START TEST default_setup 00:05:29.724 ************************************ 00:05:29.724 09:11:15 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:05:29.724 09:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:29.724 09:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:29.724 09:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:29.724 09:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:29.724 09:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:05:29.724 09:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:29.724 09:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:29.724 09:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:29.724 09:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:29.724 09:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:29.724 09:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:29.724 09:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:29.724 09:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:29.724 09:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:29.724 09:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:29.724 09:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:29.724 09:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:29.724 09:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:29.724 09:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:29.724 09:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:29.724 09:11:15 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:29.724 09:11:15 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:30.290 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:30.855 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:30.855 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:30.855 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:30.855 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:30.855 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:30.855 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:30.855 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:31.117 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:31.118 
09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7916288 kB' 'MemAvailable: 9499264 kB' 'Buffers: 2436 kB' 'Cached: 1796496 kB' 'SwapCached: 0 kB' 'Active: 462372 kB' 'Inactive: 1456472 kB' 'Active(anon): 130388 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 121224 kB' 'Mapped: 48808 kB' 'Shmem: 10472 kB' 'KReclaimable: 62916 kB' 'Slab: 135296 kB' 'SReclaimable: 62916 kB' 'SUnreclaim: 72380 kB' 'KernelStack: 6384 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.118 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.118 09:11:17 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' [xtrace condensed: the same read/compare walk repeats for get_meminfo AnonHugePages; every key from Cached through VmallocChunk fails the [[ $var == AnonHugePages ]] test and hits continue] 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup --
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7915792 kB' 'MemAvailable: 9498772 kB' 'Buffers: 2436 kB' 'Cached: 1796496 kB' 'SwapCached: 0 kB' 'Active: 461800 kB' 'Inactive: 1456476 kB' 'Active(anon): 129816 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 120916 kB' 'Mapped: 48684 kB' 'Shmem: 10472 kB' 'KReclaimable: 62916 kB' 'Slab: 135288 kB' 'SReclaimable: 62916 kB' 'SUnreclaim: 72372 kB' 'KernelStack: 6368 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.119 09:11:17 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.120 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.121 09:11:17 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7915792 kB' 'MemAvailable: 9498772 kB' 'Buffers: 2436 kB' 'Cached: 1796496 kB' 'SwapCached: 0 kB' 'Active: 461768 kB' 'Inactive: 1456476 kB' 'Active(anon): 129784 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 120920 kB' 'Mapped: 48684 kB' 'Shmem: 10472 kB' 'KReclaimable: 62916 kB' 'Slab: 135284 kB' 'SReclaimable: 62916 kB' 'SUnreclaim: 72368 kB' 'KernelStack: 6368 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.121 09:11:17 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.121 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.122 09:11:17 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.122 
09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.122 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:31.123 nr_hugepages=1024 00:05:31.123 resv_hugepages=0 00:05:31.123 surplus_hugepages=0 00:05:31.123 anon_hugepages=0 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == 
nr_hugepages )) 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7915792 kB' 'MemAvailable: 9498772 kB' 'Buffers: 2436 kB' 'Cached: 1796496 kB' 'SwapCached: 0 kB' 'Active: 461956 kB' 'Inactive: 1456476 kB' 'Active(anon): 129972 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 121072 kB' 'Mapped: 48684 kB' 'Shmem: 10472 kB' 'KReclaimable: 62916 kB' 'Slab: 135280 kB' 'SReclaimable: 62916 kB' 'SUnreclaim: 72364 kB' 'KernelStack: 6352 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.123 09:11:17 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.123 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.124 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
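The long runs of "continue" in the trace above and below come from get_meminfo() in setup/common.sh scanning /proc/meminfo (or the per-node copy under /sys/devices/system/node) one field per read. A minimal sketch of that helper, reconstructed from this xtrace output rather than taken verbatim from the SPDK source ($1 is the field name, $2 an optional NUMA node):

    shopt -s extglob                        # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-}            # e.g. get_meminfo HugePages_Surp 0
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # strip the leading "Node N " prefix

        # one "continue" is traced for every field that is not the one requested
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
            continue
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo HugePages_Surp 0 it prints the surplus hugepage count for node0 (0 here), which is what the "echo 0" / "return 0" pair further down records.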
00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7915792 kB' 'MemUsed: 4326188 kB' 'SwapCached: 0 kB' 'Active: 461780 kB' 'Inactive: 1456476 kB' 'Active(anon): 129796 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'FilePages: 1798932 kB' 'Mapped: 48684 kB' 'AnonPages: 120916 kB' 'Shmem: 10472 kB' 'KernelStack: 6368 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62916 kB' 'Slab: 135272 kB' 'SReclaimable: 62916 kB' 'SUnreclaim: 72356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.125 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.126 09:11:17 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.126 09:11:17 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:31.126 node0=1024 expecting 1024 00:05:31.126 ************************************ 00:05:31.126 END TEST default_setup 00:05:31.126 ************************************ 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:31.126 00:05:31.126 real 0m1.437s 00:05:31.126 user 0m0.644s 00:05:31.126 sys 0m0.735s 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.126 09:11:17 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:31.126 09:11:17 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:31.126 09:11:17 setup.sh.hugepages -- 
setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:31.126 09:11:17 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.126 09:11:17 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.126 09:11:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:31.126 ************************************ 00:05:31.126 START TEST per_node_1G_alloc 00:05:31.126 ************************************ 00:05:31.126 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:05:31.126 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:31.126 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:31.126 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:31.126 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:31.126 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:31.126 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:31.126 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:31.126 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:31.126 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:31.126 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:31.126 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:31.126 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:31.126 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:31.126 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:31.126 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:31.126 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:31.126 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:31.126 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:31.126 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:31.126 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:31.126 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:31.126 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:31.126 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:31.126 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.126 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:31.730 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:31.730 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:31.730 0000:00:10.0 
(1b36 0010): Already using the uio_pci_generic driver 00:05:31.730 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:31.730 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8961732 kB' 'MemAvailable: 10544692 kB' 'Buffers: 2436 kB' 'Cached: 1796496 kB' 'SwapCached: 0 kB' 'Active: 462272 kB' 'Inactive: 1456476 kB' 'Active(anon): 130288 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 121408 kB' 'Mapped: 48696 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135260 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72380 kB' 'KernelStack: 6404 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.730 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.731 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # 
anon=0 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8961508 kB' 'MemAvailable: 10544468 kB' 'Buffers: 2436 kB' 'Cached: 1796496 kB' 'SwapCached: 0 kB' 'Active: 461484 kB' 'Inactive: 1456476 kB' 'Active(anon): 129500 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 120852 kB' 'Mapped: 48760 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135324 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72444 kB' 'KernelStack: 6400 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.732 09:11:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.732 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@33 -- # return 0 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.733 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8961508 kB' 'MemAvailable: 10544468 kB' 'Buffers: 2436 kB' 'Cached: 1796496 kB' 'SwapCached: 0 kB' 'Active: 461480 kB' 'Inactive: 1456476 kB' 'Active(anon): 129496 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 120628 kB' 'Mapped: 48684 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135324 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72444 kB' 'KernelStack: 6352 kB' 'PageTables: 3984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.734 09:11:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.734 09:11:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.734 
09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.734 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.735 09:11:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.735 09:11:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.735 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.996 09:11:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:31.996 nr_hugepages=512 00:05:31.996 resv_hugepages=0 00:05:31.996 surplus_hugepages=0 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:31.996 09:11:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:31.996 anon_hugepages=0 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.996 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8961508 kB' 'MemAvailable: 10544468 kB' 'Buffers: 2436 kB' 'Cached: 1796496 kB' 'SwapCached: 0 kB' 'Active: 461456 kB' 'Inactive: 1456476 kB' 'Active(anon): 129472 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 120864 kB' 'Mapped: 48684 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135320 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72440 kB' 'KernelStack: 6336 kB' 'PageTables: 3944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- 
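The trace above reduces to a small piece of accounting: setup/hugepages.sh pulls AnonHugePages, HugePages_Surp and HugePages_Rsvd out of /proc/meminfo (all 0 in this run), sees nr_hugepages=512, and asserts at hugepages.sh@107 that the 512 preallocated pages equal nr_hugepages + surp + resv before the per-node 1G allocation test continues. A minimal standalone sketch of that check follows; get_meminfo_field and verify_hugepage_accounting are illustrative names for this annotation, not the actual SPDK helpers.

#!/usr/bin/env bash
# Sketch of the hugepage accounting performed above (illustrative only).
get_meminfo_field() {
    local key=$1 var val _
    while IFS=': ' read -r var val _; do
        # Print the value of the requested /proc/meminfo field and stop.
        [[ $var == "$key" ]] && { echo "${val:-0}"; return 0; }
    done < /proc/meminfo
    echo 0
}
verify_hugepage_accounting() {
    local expected=$1 anon surp resv total
    anon=$(get_meminfo_field AnonHugePages)     # 0 in the run above
    surp=$(get_meminfo_field HugePages_Surp)    # 0 in the run above
    resv=$(get_meminfo_field HugePages_Rsvd)    # 0 in the run above
    total=$(get_meminfo_field HugePages_Total)  # 512 in the run above
    echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    # Same consistency check as hugepages.sh@107 in the trace above.
    (( expected == total + surp + resv ))
}
verify_hugepage_accounting 512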
setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.997 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.998 09:11:18 
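The wall of trace above and below this point is setup/common.sh's get_meminfo helper stepping through every field of a meminfo listing: each "IFS=': ' read -r var val _" splits one "Key: value" line, every key that is not the requested HugePages_Total hits "continue", and the matching key finally makes the helper echo 512 and return (visible a little further down). A minimal standalone sketch of that read/compare pattern follows; the function name get_meminfo_sketch and the direct file redirect are illustrative only, not the exact SPDK helper.

#!/usr/bin/env bash
# Split each "Key: value ..." meminfo line on ':' plus whitespace, skip keys
# that do not match (each skip is one "continue" record in the xtrace above),
# and print the value of the first key that does match.
get_meminfo_sketch() {                      # illustrative name, not the real helper
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

get_meminfo_sketch HugePages_Total          # the lookup traced here resolves to 512 at this point in the run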
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8961508 kB' 'MemUsed: 3280472 kB' 'SwapCached: 0 kB' 'Active: 461432 kB' 'Inactive: 1456476 kB' 'Active(anon): 129448 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'FilePages: 1798932 kB' 'Mapped: 48684 kB' 'AnonPages: 120844 kB' 'Shmem: 10472 kB' 'KernelStack: 6352 kB' 'PageTables: 3984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62880 kB' 'Slab: 135320 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72440 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.998 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.999 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.000 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.000 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:32.000 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:32.000 node0=512 expecting 512 00:05:32.000 ************************************ 00:05:32.000 END TEST per_node_1G_alloc 00:05:32.000 ************************************ 00:05:32.000 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:32.000 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:32.000 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:32.000 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:32.000 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:32.000 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:32.000 00:05:32.000 real 0m0.711s 00:05:32.000 user 0m0.342s 00:05:32.000 sys 0m0.388s 00:05:32.000 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.000 09:11:18 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:32.000 09:11:18 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:32.000 09:11:18 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:32.000 09:11:18 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.000 09:11:18 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.000 09:11:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:32.000 ************************************ 00:05:32.000 START TEST even_2G_alloc 00:05:32.000 ************************************ 00:05:32.000 09:11:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:05:32.000 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:32.000 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:32.000 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:32.000 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:32.000 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:32.000 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:32.000 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:32.000 
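even_2G_alloc starts here by asking get_test_nr_hugepages for 2097152 kB and, with the default 2048 kB hugepage size reported as Hugepagesize later in this log, lands on nr_hugepages=1024 before spreading that count across the available nodes. A hedged sketch of the size-to-pages arithmetic; the variable names and the sysctl write in the comment are illustrative, not the exact SPDK code path.

#!/usr/bin/env bash
size_kb=2097152                                                      # 2 GiB, the argument traced above
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this test VM
nr_hugepages=$(( size_kb / hugepagesize_kb ))                        # 2097152 / 2048 = 1024
echo "nr_hugepages=$nr_hugepages"
# Applying it would look roughly like this and needs root (illustrative only):
#   echo "$nr_hugepages" > /proc/sys/vm/nr_hugepages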
09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:32.000 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:32.000 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:32.000 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:32.000 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:32.000 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:32.000 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:32.000 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:32.000 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:32.000 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:32.000 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:32.000 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:32.000 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:32.000 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:32.000 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:32.000 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.000 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:32.258 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:32.519 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:32.519 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:32.519 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:32.519 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.519 09:11:18 
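verify_nr_hugepages opens by deciding whether anonymous THP memory should be folded into the count: the "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" record above matches the kernel's transparent_hugepage policy string, and only when the active policy is not [never] does the script go on to read AnonHugePages (0 here). The sketch below assumes the policy string comes from the usual /sys/kernel/mm/transparent_hugepage/enabled knob, which this trace does not show explicitly.

#!/usr/bin/env bash
anon=0
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)   # e.g. "always [madvise] never"
# The bracketed word is the active policy; skip the AnonHugePages read only
# when THP is disabled outright.
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "anon=${anon} kB"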
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7912004 kB' 'MemAvailable: 9494964 kB' 'Buffers: 2436 kB' 'Cached: 1796496 kB' 'SwapCached: 0 kB' 'Active: 462200 kB' 'Inactive: 1456476 kB' 'Active(anon): 130216 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121580 kB' 'Mapped: 48972 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135320 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72440 kB' 'KernelStack: 6360 kB' 'PageTables: 4056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.519 09:11:18 
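The get_meminfo call traced above runs with an empty node argument, so the "[[ -e /sys/devices/system/node/node/meminfo ]]" test fails and the helper stays on /proc/meminfo; when a node number is supplied, as in the node 0 lookup earlier, it switches to the per-node file, whose lines carry a "Node 0 " prefix that the ${mem[@]#Node +([0-9]) } expansion strips before the key-matching loop runs. A standalone sketch of that file selection and prefix stripping; it enables extglob explicitly, which the real script presumably inherits from its environment.

#!/usr/bin/env bash
shopt -s extglob                                  # required for the +([0-9]) pattern below
node=0                                            # leave empty to force /proc/meminfo
mem_f=/proc/meminfo
if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem < "$mem_f"
# Per-node lines read "Node 0 HugePages_Total:    512"; dropping the prefix
# leaves plain "Key: value" lines for either source.
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}" | grep -E '^HugePages_(Total|Free|Surp)'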
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.519 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
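The per-key loops in this stretch are collecting the numbers for the same assertion that closed the previous test and is repeated below for even_2G_alloc: setup/hugepages.sh@110 requires HugePages_Total to equal the configured count plus surplus and reserved pages, and the per-node pass then prints lines such as "node0=512 expecting 512". A compact sketch of that accounting, using direct awk reads of /proc/meminfo instead of the script's own get_meminfo helper (a substitution made only to keep the example self-contained).

#!/usr/bin/env bash
expected=${1:-1024}        # 512 for per_node_1G_alloc above, 1024 for even_2G_alloc
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
# Mirrors the (( total == nr_hugepages + surp + resv )) check seen in the trace.
if (( total == expected + surp + resv )); then
    echo "HugePages_Total=$total matches expected=$expected (surp=$surp, resv=$resv)"
else
    echo "hugepage accounting mismatch: total=$total expected=$expected surp=$surp resv=$resv" >&2
    exit 1
fi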
00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7912004 kB' 'MemAvailable: 9494964 kB' 'Buffers: 2436 kB' 'Cached: 1796496 kB' 'SwapCached: 0 kB' 'Active: 461680 kB' 'Inactive: 1456476 kB' 'Active(anon): 129696 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121104 kB' 'Mapped: 48752 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135304 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72424 kB' 'KernelStack: 6400 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.520 09:11:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.520 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7912004 kB' 'MemAvailable: 9494964 kB' 'Buffers: 2436 kB' 'Cached: 1796496 kB' 'SwapCached: 0 kB' 'Active: 461732 kB' 'Inactive: 1456476 kB' 'Active(anon): 129748 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 120876 kB' 'Mapped: 48752 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135304 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72424 kB' 'KernelStack: 6400 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.521 09:11:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.521 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 
09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:32.522 nr_hugepages=1024 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:32.522 resv_hugepages=0 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:32.522 surplus_hugepages=0 00:05:32.522 anon_hugepages=0 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == 
nr_hugepages )) 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7912004 kB' 'MemAvailable: 9494964 kB' 'Buffers: 2436 kB' 'Cached: 1796496 kB' 'SwapCached: 0 kB' 'Active: 461772 kB' 'Inactive: 1456476 kB' 'Active(anon): 129788 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121156 kB' 'Mapped: 48752 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135304 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72424 kB' 'KernelStack: 6400 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.522 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.782 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.782 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.782 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.782 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.782 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.783 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 
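For reference, the lookup being traced at this point (the get_meminfo helper in setup/common.sh) reduces to roughly the sketch below: when a node index is supplied it reads the per-node file under /sys/devices/system/node, otherwise /proc/meminfo, strips the "Node N " prefix the sysfs file carries, and scans "key: value" pairs until the requested field matches. This is a condensed illustration with a hypothetical function name, not the verbatim SPDK helper.

get_meminfo_sketch() {      # condensed sketch, not the real setup/common.sh helper
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # prefer the per-NUMA-node counters when a node index is given
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#"Node $node "}              # sysfs lines start with "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"                             # kB for sizes, bare count for HugePages_*
        return 0
    done < "$mem_f"
    return 1
}
# e.g. get_meminfo_sketch HugePages_Surp 0   -> prints 0 for the node0 state shown above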
00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7912004 kB' 'MemUsed: 4329976 kB' 'SwapCached: 0 kB' 'Active: 461968 kB' 'Inactive: 1456476 kB' 'Active(anon): 129984 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'FilePages: 1798932 kB' 'Mapped: 48752 kB' 'AnonPages: 121088 kB' 'Shmem: 10472 kB' 'KernelStack: 6400 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62880 kB' 'Slab: 135304 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72424 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.784 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.785 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.786 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.786 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.786 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.786 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:32.786 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.786 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.786 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.786 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:32.786 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:32.786 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:32.786 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:32.786 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:32.786 node0=1024 expecting 1024 00:05:32.786 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:32.786 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:32.786 09:11:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:32.786 00:05:32.786 real 0m0.724s 00:05:32.786 user 0m0.341s 00:05:32.786 sys 0m0.392s 00:05:32.786 09:11:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.786 ************************************ 00:05:32.786 END TEST even_2G_alloc 00:05:32.786 ************************************ 00:05:32.786 09:11:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:32.786 09:11:18 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:32.786 09:11:18 setup.sh.hugepages -- 
setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:32.786 09:11:18 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.786 09:11:18 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.786 09:11:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:32.786 ************************************ 00:05:32.786 START TEST odd_alloc 00:05:32.786 ************************************ 00:05:32.786 09:11:18 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:05:32.786 09:11:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:32.786 09:11:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:32.786 09:11:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:32.786 09:11:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:32.786 09:11:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:32.786 09:11:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:32.786 09:11:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:32.786 09:11:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:32.786 09:11:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:32.786 09:11:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:32.786 09:11:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:32.786 09:11:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:32.786 09:11:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:32.786 09:11:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:32.786 09:11:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:32.786 09:11:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:32.786 09:11:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:32.786 09:11:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:32.786 09:11:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:32.786 09:11:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:32.786 09:11:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:32.786 09:11:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:32.786 09:11:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.786 09:11:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:33.044 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:33.310 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:33.310 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:33.310 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:33.310 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:33.310 09:11:19 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7901264 kB' 'MemAvailable: 9484232 kB' 'Buffers: 2436 kB' 'Cached: 1796504 kB' 'SwapCached: 0 kB' 'Active: 462404 kB' 'Inactive: 1456484 kB' 'Active(anon): 130420 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 128 kB' 'Writeback: 0 kB' 'AnonPages: 121416 kB' 'Mapped: 49216 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135232 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72352 kB' 'KernelStack: 6352 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 346452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.310 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.311 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.312 09:11:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7901264 kB' 'MemAvailable: 9484232 kB' 'Buffers: 2436 kB' 'Cached: 1796504 kB' 'SwapCached: 0 kB' 'Active: 462056 kB' 'Inactive: 1456484 kB' 'Active(anon): 130072 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 128 kB' 'Writeback: 0 kB' 'AnonPages: 121032 kB' 'Mapped: 48916 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135236 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72356 kB' 'KernelStack: 6380 kB' 'PageTables: 3960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 
13459992 kB' 'Committed_AS: 346452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.312 09:11:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.312 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
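Both tests derive their page counts from a target size and the 2048 kB hugepage size reported in this trace: even_2G_alloc verified 1024 pages (2 GB) above, and odd_alloc requests 2098176 kB, i.e. 2049 MB, which comes out to 1025 pages (hence HUGEMEM=2049 earlier). verify_nr_hugepages then checks that the kernel's HugePages_Total equals the requested count plus surplus and reserved pages, querying the per-node surplus the same way. A minimal sketch of that arithmetic, assuming round-up to whole pages (the exact rounding lives in setup/hugepages.sh) and using hypothetical names:

hugepagesize_kb=2048                                           # Hugepagesize from the trace above
pages_for_kb() { echo $(( ($1 + hugepagesize_kb - 1) / hugepagesize_kb )); }

pages_for_kb 2097152    # 2 GB    -> 1024 pages (even_2G_alloc)
pages_for_kb 2098176    # 2049 MB -> 1025 pages (odd_alloc)

# verification step: the kernel's total must equal requested + surplus + reserved
nr_hugepages=1025 surp=0 resv=0 hugepages_total=1025
(( hugepages_total == nr_hugepages + surp + resv )) && echo 'hugepage count verified'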
00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:33.313 09:11:19 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.313 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7901264 kB' 'MemAvailable: 9484232 kB' 'Buffers: 2436 kB' 'Cached: 1796504 kB' 'SwapCached: 0 kB' 'Active: 462072 kB' 'Inactive: 1456484 kB' 'Active(anon): 130088 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 128 kB' 'Writeback: 0 kB' 'AnonPages: 121052 kB' 'Mapped: 48916 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135236 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72356 kB' 'KernelStack: 6364 kB' 'PageTables: 3920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 346452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.314 09:11:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.314 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.315 
09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:33.315 nr_hugepages=1025 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:33.315 resv_hugepages=0 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:33.315 surplus_hugepages=0 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:33.315 anon_hugepages=0 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.315 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7901264 kB' 'MemAvailable: 9484232 kB' 'Buffers: 2436 kB' 'Cached: 1796504 kB' 'SwapCached: 0 kB' 'Active: 
461760 kB' 'Inactive: 1456484 kB' 'Active(anon): 129776 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 128 kB' 'Writeback: 0 kB' 'AnonPages: 120760 kB' 'Mapped: 48916 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135232 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72352 kB' 'KernelStack: 6364 kB' 'PageTables: 3924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 346452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 
09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.316 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:33.317 
09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7902572 kB' 'MemUsed: 4339408 kB' 'SwapCached: 0 kB' 'Active: 461768 kB' 'Inactive: 1456484 kB' 'Active(anon): 129784 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 128 kB' 'Writeback: 0 kB' 'FilePages: 1798940 kB' 'Mapped: 48916 kB' 'AnonPages: 120748 kB' 'Shmem: 10472 kB' 'KernelStack: 6400 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62880 kB' 'Slab: 135224 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.317 09:11:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.317 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.577 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:33.578 node0=1025 expecting 1025 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:33.578 
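(The block above is the odd_alloc verification: HugePages_Total came back as 1025, the test confirms it equals the requested count plus surplus and reserved pages, then walks each NUMA node, adds that node's HugePages_Surp, and prints "node0=1025 expecting 1025". A loose, self-contained rendering of that per-node accounting, assuming a fixed expected count of 1025 and the hypothetical name verify_odd_alloc_sketch:)

verify_odd_alloc_sketch() {
  local expected=1025 dir node surp
  for dir in /sys/devices/system/node/node[0-9]*; do
    [[ -e $dir/meminfo ]] || continue
    # per-node surplus hugepages, 0 in the run above
    surp=$(awk '/HugePages_Surp/ {print $NF}' "$dir/meminfo")
    node=${dir##*node}
    echo "node$node=$((expected + surp)) expecting $expected"
  done
}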
00:05:33.578 real 0m0.691s 00:05:33.578 user 0m0.327s 00:05:33.578 sys 0m0.408s 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.578 09:11:19 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:33.578 ************************************ 00:05:33.578 END TEST odd_alloc 00:05:33.578 ************************************ 00:05:33.578 09:11:19 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:33.578 09:11:19 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:33.578 09:11:19 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:33.578 09:11:19 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.578 09:11:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:33.578 ************************************ 00:05:33.578 START TEST custom_alloc 00:05:33.578 ************************************ 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:33.578 09:11:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:33.836 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:34.103 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:34.103 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:34.103 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:34.103 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:34.103 09:11:20 
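(Just above, the custom_alloc test sizes its pool: it passes 1048576 to get_test_nr_hugepages, which at the default 2048 kB hugepage size works out to 512 pages, pins them all to node 0 via HUGENODE='nodes_hp[0]=512', and re-runs scripts/setup.sh; the 'Hugetlb: 1048576 kB' and 'HugePages_Total: 512' fields later in the trace confirm the result. A back-of-the-envelope sketch of that arithmetic, with size_kb and nr_hugepages as illustrative names only:)

size_kb=1048576                                                    # requested hugepage memory, in kB
hugepagesize_kb=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)   # 2048 on this VM
nr_hugepages=$(( size_kb / hugepagesize_kb ))                      # 1048576 / 2048 = 512 pages
HUGENODE="nodes_hp[0]=$nr_hugepages"                               # pin the whole pool to node 0, as traced
# the test then re-invokes scripts/setup.sh so the kernel actually reserves the pages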
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8955204 kB' 'MemAvailable: 10538172 kB' 'Buffers: 2436 kB' 'Cached: 1796504 kB' 'SwapCached: 0 kB' 'Active: 462044 kB' 'Inactive: 1456484 kB' 'Active(anon): 130060 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121160 kB' 'Mapped: 48876 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135268 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72388 kB' 'KernelStack: 6384 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.103 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 
09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.104 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8955204 kB' 'MemAvailable: 10538172 kB' 'Buffers: 2436 kB' 'Cached: 1796504 kB' 'SwapCached: 0 kB' 'Active: 461868 kB' 'Inactive: 1456484 kB' 'Active(anon): 129884 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121208 kB' 'Mapped: 48692 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135264 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72384 kB' 'KernelStack: 6336 kB' 
'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 09:11:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.106 
09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.106 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8955204 kB' 'MemAvailable: 10538172 kB' 'Buffers: 2436 kB' 'Cached: 1796504 kB' 'SwapCached: 0 kB' 'Active: 461612 kB' 'Inactive: 1456484 kB' 'Active(anon): 129628 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 120984 kB' 'Mapped: 48692 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135252 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72372 kB' 'KernelStack: 6368 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.107 09:11:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.107 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.108 
09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:34.108 nr_hugepages=512 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:34.108 resv_hugepages=0 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:34.108 surplus_hugepages=0 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:34.108 anon_hugepages=0 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:34.108 09:11:20 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.108 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8955204 kB' 'MemAvailable: 10538172 kB' 'Buffers: 2436 kB' 'Cached: 1796504 kB' 'SwapCached: 0 kB' 'Active: 461552 kB' 'Inactive: 1456484 kB' 'Active(anon): 129568 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 120712 kB' 'Mapped: 48692 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135252 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72372 kB' 'KernelStack: 6368 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.109 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.110 09:11:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241980 kB' 'MemFree: 8955536 kB' 'MemUsed: 3286444 kB' 'SwapCached: 0 kB' 'Active: 461524 kB' 'Inactive: 1456484 kB' 'Active(anon): 129540 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1798940 kB' 'Mapped: 48692 kB' 'AnonPages: 120944 kB' 'Shmem: 10472 kB' 'KernelStack: 6336 kB' 'PageTables: 3944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62880 kB' 'Slab: 135252 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72372 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.110 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
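The long runs of IFS=': ' / read -r var val _ / continue entries in this trace are the meminfo scanner in setup/common.sh stepping past every field until it reaches the one it was asked for (HugePages_Surp in this pass). A minimal, self-contained sketch of that pattern follows; get_meminfo_sketch is a made-up name for illustration, not the real helper:

#!/usr/bin/env bash
# Sketch only: walk /proc/meminfo line by line, skip every field that is not
# the requested one, and print its value (0 if the field is never found).
# The kB suffix, when present, lands in the throwaway third field.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    echo 0
}

get_meminfo_sketch HugePages_Surp

In the trace the scan ends with echo 0 simply because HugePages_Surp is 0 on this host, as the meminfo snapshots below also show.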
00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:34.111 node0=512 expecting 512 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:34.111 00:05:34.111 real 0m0.684s 00:05:34.111 user 0m0.307s 00:05:34.111 sys 0m0.420s 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.111 ************************************ 00:05:34.111 END TEST custom_alloc 00:05:34.111 ************************************ 00:05:34.111 09:11:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:34.111 09:11:20 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:34.111 09:11:20 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:34.111 09:11:20 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.111 09:11:20 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.111 09:11:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:34.370 ************************************ 00:05:34.370 START TEST no_shrink_alloc 00:05:34.370 ************************************ 00:05:34.370 09:11:20 
setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:05:34.370 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:34.370 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:34.370 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:34.370 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:34.370 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:34.370 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:34.370 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:34.370 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:34.370 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:34.370 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:34.370 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:34.370 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:34.370 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:34.370 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:34.370 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:34.370 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:34.370 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:34.370 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:34.370 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:34.370 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:34.370 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:34.370 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:34.629 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:34.629 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:34.629 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:34.629 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:34.629 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:34.629 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:34.629 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:34.629 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:34.629 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:34.629 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:34.629 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:34.629 09:11:20 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:34.629 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:34.629 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7908360 kB' 'MemAvailable: 9491328 kB' 'Buffers: 2436 kB' 'Cached: 1796504 kB' 'SwapCached: 0 kB' 'Active: 462396 kB' 'Inactive: 1456484 kB' 'Active(anon): 130412 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121280 kB' 'Mapped: 48844 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135268 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72388 kB' 'KernelStack: 6392 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
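Just above, get_meminfo is entered for AnonHugePages with an empty node, so the per-node /sys/devices/system/node/node/meminfo path fails the -e test and the helper falls back to /proc/meminfo, loads it with mapfile, and strips any leading "Node N " prefix before the field scan. A rough standalone sketch of that load step, assuming extglob is enabled as the +([0-9]) pattern in the trace implies (illustrative only, not the verbatim setup/common.sh):

shopt -s extglob
node=""                                    # empty here, so the per-node file is skipped
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")           # per-node files prefix each line with "Node N "
printf '%s\n' "${mem[@]}"                  # the snapshot dumped in the trace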
00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.630 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
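For the numbers these snapshots keep repeating: earlier in this trace, no_shrink_alloc called get_test_nr_hugepages 2097152 0 and settled on nr_hugepages=1024 placed on node 0, which lines up with the 2048 kB Hugepagesize, HugePages_Total: 1024, and Hugetlb: 2097152 kB fields in the dumps. A worked check of that arithmetic, under the assumption that the size argument is in kB like Hugepagesize:

# Worked check of the hugepage sizing seen in the trace (size assumed to be in kB).
size_kb=2097152                            # argument to get_test_nr_hugepages
hugepagesize_kb=2048                       # Hugepagesize from the meminfo dumps
nr_hugepages=$(( size_kb / hugepagesize_kb ))
echo "nr_hugepages=$nr_hugepages"          # 1024, matching HugePages_Total
echo "hugetlb_kb=$(( nr_hugepages * hugepagesize_kb ))"   # 2097152, matching Hugetlb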
00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.894 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7908360 kB' 'MemAvailable: 9491328 kB' 'Buffers: 2436 kB' 'Cached: 1796504 kB' 'SwapCached: 0 kB' 'Active: 461656 kB' 'Inactive: 1456484 kB' 'Active(anon): 129672 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456484 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 120772 kB' 'Mapped: 48684 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135296 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72416 kB' 'KernelStack: 6368 kB' 'PageTables: 4028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.895 09:11:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.895 09:11:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.895 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.895 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.895 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.895 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.895 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.895 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.895 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.895 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.895 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.895 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.895 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.895 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.895 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.895 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.895 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.895 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.895 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.895 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.895 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.895 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.895 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.895 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.895 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.895 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.895 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 
09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.896 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7908360 kB' 'MemAvailable: 9491328 kB' 'Buffers: 2436 kB' 'Cached: 1796504 kB' 'SwapCached: 0 kB' 'Active: 461728 kB' 'Inactive: 1456484 kB' 'Active(anon): 129744 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456484 
kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 120864 kB' 'Mapped: 48944 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135284 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72404 kB' 'KernelStack: 6368 kB' 'PageTables: 4040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.897 
09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.897 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.898 09:11:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.898 09:11:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.898 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:34.899 nr_hugepages=1024 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:34.899 resv_hugepages=0 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:34.899 surplus_hugepages=0 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:34.899 anon_hugepages=0 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7910008 kB' 'MemAvailable: 9492968 kB' 'Buffers: 2436 kB' 'Cached: 1796500 kB' 'SwapCached: 0 kB' 'Active: 459184 kB' 'Inactive: 1456480 kB' 'Active(anon): 127200 kB' 
'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118296 kB' 'Mapped: 48104 kB' 'Shmem: 10472 kB' 'KReclaimable: 62872 kB' 'Slab: 135248 kB' 'SReclaimable: 62872 kB' 'SUnreclaim: 72376 kB' 'KernelStack: 6320 kB' 'PageTables: 3820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.899 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.900 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7910140 kB' 'MemUsed: 4331840 kB' 'SwapCached: 0 kB' 'Active: 458968 kB' 'Inactive: 1456480 kB' 'Active(anon): 126984 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1798936 kB' 'Mapped: 47944 kB' 'AnonPages: 118128 kB' 'Shmem: 10472 kB' 'KernelStack: 6304 kB' 'PageTables: 3732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62872 kB' 'Slab: 135220 kB' 'SReclaimable: 62872 kB' 'SUnreclaim: 72348 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.901 09:11:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.901 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.902 
09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.902 09:11:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:34.902 node0=1024 expecting 1024 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:34.902 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:35.160 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:35.424 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:35.424 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:35.424 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:35.424 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:35.424 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 
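The INFO line above is the crux of this no_shrink_alloc pass: with CLEAR_HUGE=no and NRHUGE=512, the setup script is asked for 512 hugepages while node0 already holds 1024, and the pool is left at 1024 rather than shrunk (the follow-up verification still expects 1024). A minimal stand-alone sketch of that grow-only behaviour, assuming the standard per-node sysfs knob and a hypothetical helper name (this is not the SPDK script itself):

    #!/usr/bin/env bash
    # ensure_hugepages: allocate at least "want" 2 MiB hugepages on a node,
    # but never shrink an existing, larger pool -- the behaviour this test exercises.
    ensure_hugepages() {
        local node=${1:-0} want=${2:-512}
        local sysfs=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
        local have
        have=$(<"$sysfs")
        if (( have >= want )); then
            echo "INFO: Requested ${want} hugepages but ${have} already allocated on node${node}"
            return 0                 # keep the larger pool untouched
        fi
        echo "$want" > "$sysfs"      # only ever grows the pool
    }

    # With 1024 pages already present, asking for 512 is a no-op:
    # ensure_hugepages 0 512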
00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7910912 kB' 'MemAvailable: 9493872 kB' 'Buffers: 2436 kB' 'Cached: 1796500 kB' 'SwapCached: 0 kB' 'Active: 459824 kB' 'Inactive: 1456480 kB' 'Active(anon): 127840 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 119000 kB' 'Mapped: 48300 kB' 'Shmem: 10472 kB' 'KReclaimable: 62872 kB' 'Slab: 135152 kB' 'SReclaimable: 62872 kB' 'SUnreclaim: 72280 kB' 'KernelStack: 6400 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.424 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.425 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.426 09:11:21 
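The long key-by-key scan above is the get_meminfo pattern: each meminfo line is split with IFS=': ' into a key, a value and a throwaway field, every non-matching key hits continue, and the matching one is echoed before returning. A self-contained sketch of that pattern, with illustrative names (not a copy of setup/common.sh):

    #!/usr/bin/env bash
    # get_meminfo: print the value of one /proc/meminfo key, as in the trace above.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every other key
            echo "$val"                        # the unit ("kB"), if any, lands in "_"
            return 0
        done < /proc/meminfo
        return 1                               # key not present
    }

    anon=$(get_meminfo AnonHugePages)   # 0 in the run above
    surp=$(get_meminfo HugePages_Surp)  # also 0 here
    echo "anon=${anon} surp=${surp}"

The trace reads the file into an array first (mapfile -t mem) and strips a leading "Node <N> " prefix before looping; the simplified loop here skips that step and reads /proc/meminfo directly.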
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7910912 kB' 'MemAvailable: 9493872 kB' 'Buffers: 2436 kB' 'Cached: 1796500 kB' 'SwapCached: 0 kB' 'Active: 459784 kB' 'Inactive: 1456480 kB' 'Active(anon): 127800 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118924 kB' 'Mapped: 48048 kB' 'Shmem: 10472 kB' 'KReclaimable: 62872 kB' 'Slab: 135152 kB' 'SReclaimable: 62872 kB' 'SUnreclaim: 72280 kB' 'KernelStack: 6336 kB' 'PageTables: 3836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 
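The snapshot printed just above is internally consistent on the hugepage side: HugePages_Total is 1024, Hugepagesize is 2048 kB, and Hugetlb is 2097152 kB, i.e. 1024 x 2048. A quick check of that relation, reusing the get_meminfo sketch from earlier (a hypothetical helper, not part of the test):

    total=$(get_meminfo HugePages_Total)   # 1024
    size=$(get_meminfo Hugepagesize)       # 2048 (kB)
    hugetlb=$(get_meminfo Hugetlb)         # 2097152 (kB)
    if (( total * size == hugetlb )); then
        echo "hugetlb accounting consistent: ${total} pages * ${size} kB = ${hugetlb} kB"
    else
        echo "mismatch: ${total} * ${size} != ${hugetlb}"
    fi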
09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.426 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.427 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 09:11:21 
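Each get_meminfo call above also shows how the source file is picked: mem_f defaults to /proc/meminfo, and a per-node file is only used when a node number is supplied and /sys/devices/system/node/node<N>/meminfo exists (with no node given, the path degenerates to .../node/node/meminfo and the check fails, as in this run). A small sketch of that selection step, with illustrative names:

    #!/usr/bin/env bash
    # pick_meminfo: choose the per-node meminfo file when possible, else /proc/meminfo.
    pick_meminfo() {
        local node=$1
        local mem_f=/proc/meminfo
        local node_f=/sys/devices/system/node/node${node}/meminfo
        if [[ -n $node && -e $node_f ]]; then
            mem_f=$node_f
        fi
        echo "$mem_f"
    }

    pick_meminfo      # -> /proc/meminfo (no node supplied, as in the log)
    pick_meminfo 0    # -> per-node file on a system that exposes node0

Per-node files prefix every key with "Node <N> ", which is why the trace strips that prefix from the array (mem=("${mem[@]#Node +([0-9]) }")) before the key comparison.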
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7910660 kB' 'MemAvailable: 9493620 kB' 'Buffers: 2436 kB' 'Cached: 1796500 kB' 'SwapCached: 0 kB' 'Active: 459340 kB' 'Inactive: 1456480 kB' 'Active(anon): 127356 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118512 kB' 'Mapped: 47948 kB' 'Shmem: 10472 kB' 'KReclaimable: 62872 kB' 'Slab: 135144 kB' 'SReclaimable: 62872 kB' 'SUnreclaim: 72272 kB' 'KernelStack: 6272 kB' 'PageTables: 3656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.428 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:35.429 nr_hugepages=1024 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:35.429 resv_hugepages=0 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:35.429 surplus_hugepages=0 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:35.429 anon_hugepages=0 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:35.429 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7910660 kB' 'MemAvailable: 9493620 kB' 'Buffers: 2436 kB' 'Cached: 1796500 kB' 'SwapCached: 0 kB' 'Active: 459048 kB' 'Inactive: 1456480 kB' 'Active(anon): 127064 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118472 kB' 'Mapped: 47948 kB' 'Shmem: 10472 kB' 'KReclaimable: 62872 kB' 'Slab: 135144 kB' 'SReclaimable: 62872 kB' 'SUnreclaim: 72272 kB' 'KernelStack: 6256 kB' 'PageTables: 3616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 
09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
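A note on reading these trace records: each one is bash xtrace (set -x) output with what appears to be a customized PS4 that prepends the wall-clock time, the nested test name (setup.sh.hugepages.no_shrink_alloc), and the source file and line of the traced command (the "-- setup/common.sh@NN --" part). When xtrace prints the right-hand side of a [[ word == pattern ]] test it re-quotes the pattern one character at a time, which is why the target key shows up as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l instead of HugePages_Total; the script is only doing a plain string comparison against each meminfo field name. A minimal reproduction in a throwaway shell (hypothetical session, not taken from this log):

    $ set -x
    $ var=MemTotal
    + var=MemTotal
    $ [[ $var == HugePages_Total ]]
    + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]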
00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.430 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 09:11:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
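The repeated IFS=': ' / read -r var val _ / [[ field == ... ]] / continue records above and below are a single pass of the get_meminfo helper from setup/common.sh scanning a meminfo dump for one field (HugePages_Total at this point in the run). A compact sketch of that parsing approach, reconstructed from the trace rather than copied from the SPDK source, so the exact structure is approximate:

    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {    # sketch reconstructed from the xtrace; the real helper may differ
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo mem var val _
        # Per-node statistics live in sysfs; fall back to the global file otherwise.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")       # strip the "Node N " prefix of per-node files
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the repeated [[ ... ]] checks seen in the trace
            echo "$val"                        # kB for most fields, a bare count for HugePages_*
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo MemFree or get_meminfo HugePages_Surp 0, it prints the requested value and returns non-zero when the field is absent, which is how the hugepages tests sample memory state between allocations.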
00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 09:11:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7910660 kB' 'MemUsed: 4331320 kB' 'SwapCached: 0 kB' 'Active: 459044 kB' 'Inactive: 1456480 kB' 'Active(anon): 127060 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1798936 kB' 'Mapped: 47948 kB' 'AnonPages: 118468 kB' 'Shmem: 10472 kB' 'KernelStack: 6324 kB' 'PageTables: 3616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'KReclaimable: 62872 kB' 'Slab: 135144 kB' 'SReclaimable: 62872 kB' 'SUnreclaim: 72272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.431 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
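What the scanning is in service of: no_shrink_alloc preallocates nr_hugepages=1024, performs its test allocations, and then verifies that the hugepage pool has not shrunk and that the expected count is still attributed to node 0 (the 'node0=1024 expecting 1024' line a little further on). A condensed approximation of those checks, reusing the get_meminfo sketch above; the real setup/hugepages.sh drives this through its nodes_test bookkeeping, so treat this as an outline of the logic rather than the script itself:

    nr_hugepages=1024
    resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
    surp=$(get_meminfo HugePages_Surp)      # 0 in this run
    total=$(get_meminfo HugePages_Total)    # 1024 in this run

    # The pool must still account for everything requested:
    # allocated pages == requested + surplus + reserved.
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage pool shrank unexpectedly'

    # With a single NUMA node in this VM, node 0 should still hold the whole pool.
    node0=$(get_meminfo HugePages_Total 0)
    echo "node0=$node0 expecting $nr_hugepages"
    [[ $node0 == "$nr_hugepages" ]]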
00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.432 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.433 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.433 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.433 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.433 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.433 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.433 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:35.433 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.433 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.433 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.433 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:35.433 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:35.691 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:35.691 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:35.691 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:35.691 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:35.691 node0=1024 expecting 1024 00:05:35.691 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:35.691 09:11:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:35.691 00:05:35.691 real 0m1.323s 00:05:35.691 user 0m0.614s 00:05:35.691 sys 0m0.800s 00:05:35.691 09:11:21 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.691 ************************************ 00:05:35.691 END TEST no_shrink_alloc 00:05:35.691 ************************************ 00:05:35.691 09:11:21 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:35.691 09:11:21 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:35.691 09:11:21 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:35.691 09:11:21 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:35.691 09:11:21 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:35.691 09:11:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:35.691 09:11:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:35.691 09:11:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:35.691 09:11:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:35.691 09:11:21 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:35.691 09:11:21 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:35.691 ************************************ 00:05:35.691 END TEST hugepages 00:05:35.691 ************************************ 00:05:35.691 00:05:35.691 real 0m6.014s 00:05:35.691 user 0m2.738s 00:05:35.691 sys 0m3.401s 00:05:35.691 09:11:21 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.691 09:11:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:35.691 09:11:21 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:35.691 09:11:21 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:35.691 09:11:21 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.691 09:11:21 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.691 09:11:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:35.691 ************************************ 00:05:35.691 START TEST driver 00:05:35.691 ************************************ 00:05:35.691 09:11:21 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:35.691 * Looking for test storage... 00:05:35.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:35.691 09:11:21 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:35.691 09:11:21 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:35.691 09:11:21 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:42.257 09:11:27 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:42.257 09:11:27 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.257 09:11:27 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.257 09:11:27 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:42.257 ************************************ 00:05:42.257 START TEST guess_driver 00:05:42.257 ************************************ 00:05:42.257 09:11:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:42.257 09:11:27 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:42.257 09:11:27 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:42.257 09:11:27 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:42.257 09:11:27 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:42.257 09:11:27 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:42.257 09:11:27 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:42.257 09:11:27 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:42.257 09:11:27 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:42.257 09:11:27 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:42.257 09:11:27 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:42.257 09:11:27 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:42.257 
09:11:27 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:42.257 09:11:27 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:42.257 09:11:27 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:42.257 09:11:27 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:42.257 09:11:27 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:42.257 09:11:27 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:42.257 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:42.257 09:11:27 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:42.257 09:11:27 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:42.257 09:11:27 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:42.257 Looking for driver=uio_pci_generic 00:05:42.257 09:11:27 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:42.257 09:11:27 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:42.257 09:11:27 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:42.257 09:11:27 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.257 09:11:27 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:42.257 09:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:42.257 09:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:42.257 09:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:42.822 09:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:42.822 09:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:42.822 09:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:42.823 09:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:42.823 09:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:42.823 09:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:42.823 09:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:42.823 09:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:42.823 09:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:42.823 09:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:42.823 09:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:42.823 09:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:42.823 09:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:42.823 09:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:42.823 09:11:29 setup.sh.driver.guess_driver 
-- setup/common.sh@9 -- # [[ reset == output ]] 00:05:42.823 09:11:29 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:49.379 00:05:49.379 real 0m7.137s 00:05:49.379 user 0m0.820s 00:05:49.379 sys 0m1.396s 00:05:49.379 09:11:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.379 09:11:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:49.379 ************************************ 00:05:49.379 END TEST guess_driver 00:05:49.379 ************************************ 00:05:49.379 09:11:35 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:49.379 00:05:49.379 real 0m13.166s 00:05:49.379 user 0m1.188s 00:05:49.379 sys 0m2.161s 00:05:49.379 09:11:35 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.379 09:11:35 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:49.379 ************************************ 00:05:49.379 END TEST driver 00:05:49.379 ************************************ 00:05:49.379 09:11:35 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:49.379 09:11:35 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:49.379 09:11:35 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.379 09:11:35 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.379 09:11:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:49.379 ************************************ 00:05:49.379 START TEST devices 00:05:49.379 ************************************ 00:05:49.379 09:11:35 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:49.379 * Looking for test storage... 
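Before the devices test starts probing namespaces, note what the guess_driver trace above actually decided: vfio-pci is preferred when IOMMU groups exist (or unsafe no-IOMMU mode is enabled), and uio_pci_generic is the fallback when modprobe can resolve the module. A condensed sketch of that decision, assuming nullglob so an empty /sys/kernel/iommu_groups counts as zero groups; this is a paraphrase, not the setup/driver.sh source.

#!/usr/bin/env bash
# Sketch of the driver choice worked through by the guess_driver trace above.
shopt -s nullglob                 # empty iommu_groups dir must count as 0
pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*) unsafe
    unsafe=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null)
    if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic      # module resolvable, as in this run
    else
        echo 'No valid driver found'
    fi
}
pick_driver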
00:05:49.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:49.379 09:11:35 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:49.379 09:11:35 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:49.379 09:11:35 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:49.379 09:11:35 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:49.946 09:11:36 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:49.946 09:11:36 
setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:49.946 09:11:36 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:49.946 09:11:36 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:49.946 09:11:36 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:49.946 09:11:36 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:49.946 09:11:36 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:49.946 09:11:36 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:49.946 09:11:36 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:49.946 09:11:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:49.946 09:11:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:49.946 09:11:36 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:49.946 09:11:36 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:49.946 09:11:36 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:49.946 09:11:36 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:49.946 09:11:36 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:49.946 No valid GPT data, bailing 00:05:49.946 09:11:36 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:49.946 09:11:36 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:49.946 09:11:36 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:49.946 09:11:36 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:49.946 09:11:36 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:49.946 09:11:36 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:49.946 09:11:36 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:49.946 09:11:36 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:49.946 09:11:36 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:49.946 09:11:36 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:49.946 09:11:36 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:49.946 09:11:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:49.946 09:11:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:49.946 09:11:36 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:49.946 09:11:36 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:49.946 
09:11:36 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:49.946 09:11:36 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:49.946 09:11:36 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:50.204 No valid GPT data, bailing 00:05:50.204 09:11:36 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:50.204 09:11:36 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:50.205 09:11:36 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:50.205 09:11:36 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:50.205 09:11:36 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:50.205 09:11:36 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:05:50.205 09:11:36 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:05:50.205 09:11:36 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:05:50.205 No valid GPT data, bailing 00:05:50.205 09:11:36 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:50.205 09:11:36 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:50.205 09:11:36 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:05:50.205 09:11:36 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:05:50.205 09:11:36 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:05:50.205 09:11:36 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:05:50.205 09:11:36 
setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:05:50.205 09:11:36 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:05:50.205 No valid GPT data, bailing 00:05:50.205 09:11:36 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:50.205 09:11:36 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:50.205 09:11:36 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:05:50.205 09:11:36 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:05:50.205 09:11:36 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:05:50.205 09:11:36 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:50.205 09:11:36 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:05:50.205 09:11:36 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:05:50.205 09:11:36 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:05:50.464 No valid GPT data, bailing 00:05:50.464 09:11:36 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:50.464 09:11:36 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:50.464 09:11:36 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:50.464 09:11:36 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:05:50.464 09:11:36 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:05:50.464 09:11:36 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:05:50.464 09:11:36 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:50.464 09:11:36 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:50.464 09:11:36 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:50.464 09:11:36 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:50.464 09:11:36 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:50.464 09:11:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:05:50.464 09:11:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:05:50.464 09:11:36 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:05:50.464 09:11:36 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:05:50.464 09:11:36 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:05:50.464 09:11:36 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:05:50.464 09:11:36 setup.sh.devices 
-- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:05:50.464 No valid GPT data, bailing 00:05:50.464 09:11:36 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:50.464 09:11:36 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:50.464 09:11:36 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:50.464 09:11:36 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:05:50.464 09:11:36 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:05:50.464 09:11:36 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:05:50.464 09:11:36 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:05:50.464 09:11:36 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:05:50.464 09:11:36 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:05:50.464 09:11:36 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:50.464 09:11:36 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:50.464 09:11:36 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.464 09:11:36 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.464 09:11:36 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:50.464 ************************************ 00:05:50.464 START TEST nvme_mount 00:05:50.464 ************************************ 00:05:50.464 09:11:36 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:50.464 09:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:50.464 09:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:50.464 09:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:50.464 09:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:50.464 09:11:36 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:50.464 09:11:36 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:50.464 09:11:36 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:50.464 09:11:36 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:50.464 09:11:36 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:50.464 09:11:36 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:50.464 09:11:36 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:50.464 09:11:36 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:50.464 09:11:36 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:50.464 09:11:36 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:50.464 09:11:36 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:50.464 09:11:36 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:50.464 09:11:36 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:50.464 09:11:36 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:50.464 09:11:36 setup.sh.devices.nvme_mount -- setup/common.sh@53 
-- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:51.398 Creating new GPT entries in memory. 00:05:51.398 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:51.398 other utilities. 00:05:51.398 09:11:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:51.398 09:11:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:51.398 09:11:37 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:51.398 09:11:37 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:51.398 09:11:37 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:52.773 Creating new GPT entries in memory. 00:05:52.773 The operation has completed successfully. 00:05:52.773 09:11:38 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:52.773 09:11:38 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:52.773 09:11:38 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 59769 00:05:52.773 09:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:52.773 09:11:38 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:52.773 09:11:38 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:52.773 09:11:38 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:52.773 09:11:38 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:52.773 09:11:38 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:52.773 09:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:52.773 09:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:52.773 09:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:52.773 09:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:52.773 09:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:52.773 09:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:52.773 09:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:52.773 09:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:52.773 09:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:52.773 09:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.773 09:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:52.773 09:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:52.773 09:11:38 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- 
# [[ output == output ]] 00:05:52.773 09:11:38 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:52.773 09:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:52.773 09:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:52.773 09:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:52.773 09:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.774 09:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:52.774 09:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.032 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.032 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.032 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.032 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.032 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.032 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.291 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.291 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.549 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:53.549 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:53.549 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:53.549 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:53.549 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:53.549 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:53.549 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:53.549 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:53.549 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:53.549 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:53.549 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:53.549 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:53.549 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:53.808 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:53.808 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 
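The nvme_mount passes around this point all follow one cycle: zap and repartition the test disk, format, mount under the test directory, drop a dummy file for the verify step, then unmount and wipe signatures before the next variant. A condensed sketch of that cycle with placeholder paths; the real test derives the disk and mount point from its config and waits for partition uevents via its own helpers.

# Sketch of the partition/format/mount/cleanup cycle traced above and below.
disk=/dev/nvme0n1
part=${disk}p1
mnt=/tmp/nvme_mount_sketch        # placeholder for the test's nvme_mount dir

sgdisk "$disk" --zap-all          # destroy any existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:264191
# (the real test waits for the partition uevent before formatting)
mkfs.ext4 -qF "$part"
mkdir -p "$mnt"
mount "$part" "$mnt"
touch "$mnt/test_nvme"            # dummy file the verify step looks for

umount "$mnt"                     # cleanup, as in cleanup_nvme
wipefs --all "$part"
wipefs --all "$disk"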
00:05:53.808 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:53.808 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:53.808 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:53.808 09:11:39 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:53.808 09:11:39 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:53.808 09:11:39 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:53.808 09:11:39 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:53.808 09:11:39 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:53.808 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:53.808 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:53.808 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:53.808 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:53.808 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:53.808 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:53.808 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:53.808 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:53.808 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:53.808 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.808 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:53.808 09:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:53.808 09:11:39 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:53.808 09:11:39 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:54.066 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:54.066 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:54.066 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:54.066 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.066 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:54.066 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.066 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:54.066 
09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.324 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:54.324 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.324 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:54.324 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.582 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:54.582 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.582 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:54.582 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:54.582 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:54.840 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:54.841 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:54.841 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:54.841 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:54.841 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:54.841 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:54.841 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:54.841 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:54.841 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:54.841 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:54.841 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:54.841 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.841 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:54.841 09:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:54.841 09:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:54.841 09:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:55.098 09:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:55.098 09:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:55.098 09:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:55.098 09:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.098 09:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:55.098 09:11:41 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.098 09:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:55.098 09:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.356 09:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:55.356 09:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.356 09:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:55.356 09:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.615 09:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:55.615 09:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.615 09:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:55.615 09:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:55.615 09:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:55.615 09:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:55.615 09:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:55.615 09:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:55.615 09:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:55.615 09:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:55.615 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:55.615 00:05:55.615 real 0m5.278s 00:05:55.615 user 0m1.405s 00:05:55.615 sys 0m1.572s 00:05:55.615 09:11:41 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.615 ************************************ 00:05:55.615 END TEST nvme_mount 00:05:55.615 ************************************ 00:05:55.615 09:11:41 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:55.873 09:11:41 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:55.873 09:11:41 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:55.873 09:11:41 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.873 09:11:41 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.873 09:11:41 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:55.873 ************************************ 00:05:55.873 START TEST dm_mount 00:05:55.873 ************************************ 00:05:55.873 09:11:42 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:55.873 09:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:55.873 09:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:55.873 09:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:55.873 09:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:55.873 09:11:42 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:55.873 09:11:42 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:55.873 
09:11:42 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:55.873 09:11:42 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:55.873 09:11:42 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:55.873 09:11:42 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:55.873 09:11:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:55.874 09:11:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:55.874 09:11:42 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:55.874 09:11:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:55.874 09:11:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:55.874 09:11:42 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:55.874 09:11:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:55.874 09:11:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:55.874 09:11:42 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:55.874 09:11:42 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:55.874 09:11:42 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:56.807 Creating new GPT entries in memory. 00:05:56.807 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:56.807 other utilities. 00:05:56.807 09:11:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:56.807 09:11:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:56.807 09:11:43 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:56.807 09:11:43 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:56.807 09:11:43 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:57.741 Creating new GPT entries in memory. 00:05:57.741 The operation has completed successfully. 00:05:57.741 09:11:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:57.741 09:11:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:57.741 09:11:44 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:57.741 09:11:44 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:57.741 09:11:44 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:59.116 The operation has completed successfully. 
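The dm_mount steps that follow stack a device-mapper target over the two partitions just created, and the rest of the test then formats and mounts /dev/mapper/nvme_dm_test like any other namespace. The log does not show the dm table itself, so the following is only a generic linear-target sketch over two partitions, with sizes taken from blockdev.

# Generic sketch (the table actually fed to dmsetup is not shown in this log):
# concatenate two partitions into one linear device-mapper target.
p1=/dev/nvme0n1p1
p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1")      # partition sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")

dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF

readlink -f /dev/mapper/nvme_dm_test   # resolves to /dev/dm-0, as in the trace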
00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 60394 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:59.116 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:59.374 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:59.374 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:59.374 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:59.374 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:59.374 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:59.374 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:59.632 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:59.632 09:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:59.890 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:59.890 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:59.890 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:59.890 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:59.890 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:59.890 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:59.890 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:59.890 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:59.890 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:59.890 09:11:46 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:05:59.890 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:59.890 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:59.890 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:59.890 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:59.890 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:59.890 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:59.890 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:59.890 09:11:46 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:59.890 09:11:46 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:00.148 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:00.148 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:00.148 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:00.148 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.148 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:00.148 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.406 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:00.406 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.406 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:00.406 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.406 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:00.406 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.663 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:00.663 09:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:00.921 09:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:00.921 09:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:00.921 09:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:00.921 09:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:00.921 09:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:00.921 09:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:00.921 09:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:00.921 09:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:00.921 09:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
00:06:00.921 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:00.921 09:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:00.921 09:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:00.921 00:06:00.921 real 0m5.186s 00:06:00.921 user 0m1.042s 00:06:00.921 sys 0m1.058s 00:06:00.921 09:11:47 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.921 09:11:47 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:00.921 ************************************ 00:06:00.921 END TEST dm_mount 00:06:00.921 ************************************ 00:06:00.921 09:11:47 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:06:00.921 09:11:47 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:00.921 09:11:47 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:00.921 09:11:47 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:00.921 09:11:47 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:00.921 09:11:47 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:00.921 09:11:47 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:00.921 09:11:47 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:01.179 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:01.179 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:01.179 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:01.179 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:01.179 09:11:47 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:06:01.179 09:11:47 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:01.179 09:11:47 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:01.179 09:11:47 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:01.179 09:11:47 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:01.179 09:11:47 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:01.179 09:11:47 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:01.179 00:06:01.179 real 0m12.447s 00:06:01.179 user 0m3.371s 00:06:01.179 sys 0m3.396s 00:06:01.179 09:11:47 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.179 09:11:47 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:01.179 ************************************ 00:06:01.179 END TEST devices 00:06:01.179 ************************************ 00:06:01.436 09:11:47 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:06:01.436 00:06:01.436 ************************************ 00:06:01.436 END TEST setup.sh 00:06:01.436 ************************************ 00:06:01.436 real 0m43.946s 00:06:01.436 user 0m10.508s 00:06:01.436 sys 0m13.088s 00:06:01.436 09:11:47 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.436 09:11:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:01.436 09:11:47 -- common/autotest_common.sh@1142 -- # return 0 00:06:01.436 09:11:47 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:02.002 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:02.258 Hugepages 00:06:02.258 node hugesize free / total 00:06:02.258 node0 1048576kB 0 / 0 00:06:02.258 node0 2048kB 2048 / 2048 00:06:02.258 00:06:02.258 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:02.516 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:02.516 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:06:02.516 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:02.516 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:06:02.819 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:06:02.819 09:11:48 -- spdk/autotest.sh@130 -- # uname -s 00:06:02.819 09:11:48 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:02.819 09:11:48 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:02.819 09:11:48 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:03.077 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:04.010 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:04.010 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:04.010 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:04.010 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:04.010 09:11:50 -- common/autotest_common.sh@1532 -- # sleep 1 00:06:04.944 09:11:51 -- common/autotest_common.sh@1533 -- # bdfs=() 00:06:04.944 09:11:51 -- common/autotest_common.sh@1533 -- # local bdfs 00:06:04.944 09:11:51 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:06:04.944 09:11:51 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:06:04.944 09:11:51 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:04.944 09:11:51 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:04.944 09:11:51 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:04.944 09:11:51 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:04.944 09:11:51 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:04.944 09:11:51 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:06:04.944 09:11:51 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:04.944 09:11:51 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:05.509 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:05.509 Waiting for block devices as requested 00:06:05.509 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:05.768 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:05.768 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:05.768 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:11.045 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:11.045 09:11:57 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:11.045 09:11:57 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:11.045 09:11:57 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:06:11.045 09:11:57 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:11.045 09:11:57 -- common/autotest_common.sh@1502 -- # 
bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:11.045 09:11:57 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:11.045 09:11:57 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:11.045 09:11:57 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:06:11.045 09:11:57 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:06:11.045 09:11:57 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:06:11.045 09:11:57 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:06:11.045 09:11:57 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:11.045 09:11:57 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:11.045 09:11:57 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:11.045 09:11:57 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:11.045 09:11:57 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:11.045 09:11:57 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:06:11.045 09:11:57 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:11.045 09:11:57 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:11.045 09:11:57 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:11.045 09:11:57 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:11.045 09:11:57 -- common/autotest_common.sh@1557 -- # continue 00:06:11.045 09:11:57 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:11.045 09:11:57 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:11.045 09:11:57 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:11.045 09:11:57 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:06:11.045 09:11:57 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:11.045 09:11:57 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:11.045 09:11:57 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:11.045 09:11:57 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:11.045 09:11:57 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:11.045 09:11:57 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:11.045 09:11:57 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:11.045 09:11:57 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:11.045 09:11:57 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:11.045 09:11:57 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:11.045 09:11:57 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:11.045 09:11:57 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:11.045 09:11:57 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:11.045 09:11:57 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:11.045 09:11:57 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:11.045 09:11:57 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:11.045 09:11:57 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:11.045 09:11:57 -- common/autotest_common.sh@1557 -- # continue 00:06:11.045 09:11:57 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:11.045 09:11:57 -- common/autotest_common.sh@1539 -- # 
get_nvme_ctrlr_from_bdf 0000:00:12.0 00:06:11.045 09:11:57 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:11.045 09:11:57 -- common/autotest_common.sh@1502 -- # grep 0000:00:12.0/nvme/nvme 00:06:11.045 09:11:57 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:11.045 09:11:57 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:06:11.045 09:11:57 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:11.045 09:11:57 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:06:11.045 09:11:57 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:06:11.045 09:11:57 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:06:11.045 09:11:57 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:06:11.045 09:11:57 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:11.045 09:11:57 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:11.045 09:11:57 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:11.045 09:11:57 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:11.045 09:11:57 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:11.045 09:11:57 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme2 00:06:11.045 09:11:57 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:11.045 09:11:57 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:11.045 09:11:57 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:11.045 09:11:57 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:11.045 09:11:57 -- common/autotest_common.sh@1557 -- # continue 00:06:11.045 09:11:57 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:11.045 09:11:57 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:06:11.045 09:11:57 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:11.045 09:11:57 -- common/autotest_common.sh@1502 -- # grep 0000:00:13.0/nvme/nvme 00:06:11.045 09:11:57 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:11.045 09:11:57 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:06:11.045 09:11:57 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:11.045 09:11:57 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme3 00:06:11.045 09:11:57 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme3 00:06:11.045 09:11:57 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme3 ]] 00:06:11.045 09:11:57 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme3 00:06:11.045 09:11:57 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:11.045 09:11:57 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:11.045 09:11:57 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:11.045 09:11:57 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:11.045 09:11:57 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:11.045 09:11:57 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme3 00:06:11.045 09:11:57 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:11.045 09:11:57 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:11.045 09:11:57 -- 
common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:11.045 09:11:57 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:11.045 09:11:57 -- common/autotest_common.sh@1557 -- # continue 00:06:11.045 09:11:57 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:11.045 09:11:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:11.045 09:11:57 -- common/autotest_common.sh@10 -- # set +x 00:06:11.045 09:11:57 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:11.045 09:11:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:11.045 09:11:57 -- common/autotest_common.sh@10 -- # set +x 00:06:11.045 09:11:57 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:11.612 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:12.182 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:12.182 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:12.182 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:12.182 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:12.441 09:11:58 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:12.441 09:11:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:12.441 09:11:58 -- common/autotest_common.sh@10 -- # set +x 00:06:12.441 09:11:58 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:12.441 09:11:58 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:12.441 09:11:58 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:12.441 09:11:58 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:12.441 09:11:58 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:12.441 09:11:58 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:12.441 09:11:58 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:12.441 09:11:58 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:12.441 09:11:58 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:12.441 09:11:58 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:12.441 09:11:58 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:12.441 09:11:58 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:06:12.441 09:11:58 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:12.441 09:11:58 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:12.441 09:11:58 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:12.441 09:11:58 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:12.441 09:11:58 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:12.441 09:11:58 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:12.441 09:11:58 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:12.441 09:11:58 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:12.441 09:11:58 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:12.441 09:11:58 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:12.441 09:11:58 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:06:12.441 09:11:58 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:12.441 09:11:58 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:12.441 09:11:58 -- 
common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:12.441 09:11:58 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:06:12.441 09:11:58 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:12.441 09:11:58 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:12.441 09:11:58 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:06:12.441 09:11:58 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:06:12.441 09:11:58 -- common/autotest_common.sh@1593 -- # return 0 00:06:12.441 09:11:58 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:12.441 09:11:58 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:12.441 09:11:58 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:12.441 09:11:58 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:12.441 09:11:58 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:12.441 09:11:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:12.441 09:11:58 -- common/autotest_common.sh@10 -- # set +x 00:06:12.441 09:11:58 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:12.441 09:11:58 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:12.441 09:11:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.441 09:11:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.441 09:11:58 -- common/autotest_common.sh@10 -- # set +x 00:06:12.441 ************************************ 00:06:12.441 START TEST env 00:06:12.441 ************************************ 00:06:12.441 09:11:58 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:12.699 * Looking for test storage... 00:06:12.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:12.699 09:11:58 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:12.699 09:11:58 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.699 09:11:58 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.699 09:11:58 env -- common/autotest_common.sh@10 -- # set +x 00:06:12.699 ************************************ 00:06:12.699 START TEST env_memory 00:06:12.699 ************************************ 00:06:12.700 09:11:58 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:12.700 00:06:12.700 00:06:12.700 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.700 http://cunit.sourceforge.net/ 00:06:12.700 00:06:12.700 00:06:12.700 Suite: memory 00:06:12.700 Test: alloc and free memory map ...[2024-07-12 09:11:58.907882] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:12.700 passed 00:06:12.700 Test: mem map translation ...[2024-07-12 09:11:58.968606] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:12.700 [2024-07-12 09:11:58.968700] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:12.700 [2024-07-12 09:11:58.968802] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:12.700 [2024-07-12 09:11:58.968835] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:12.958 passed 00:06:12.958 Test: mem map registration ...[2024-07-12 09:11:59.068134] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:12.958 [2024-07-12 09:11:59.068243] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:12.958 passed 00:06:12.958 Test: mem map adjacent registrations ...passed 00:06:12.958 00:06:12.958 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.958 suites 1 1 n/a 0 0 00:06:12.958 tests 4 4 4 0 0 00:06:12.958 asserts 152 152 152 0 n/a 00:06:12.958 00:06:12.958 Elapsed time = 0.345 seconds 00:06:12.958 00:06:12.958 real 0m0.390s 00:06:12.958 user 0m0.357s 00:06:12.958 sys 0m0.028s 00:06:12.958 09:11:59 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.958 09:11:59 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:12.958 ************************************ 00:06:12.958 END TEST env_memory 00:06:12.958 ************************************ 00:06:12.958 09:11:59 env -- common/autotest_common.sh@1142 -- # return 0 00:06:12.958 09:11:59 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:12.958 09:11:59 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.958 09:11:59 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.958 09:11:59 env -- common/autotest_common.sh@10 -- # set +x 00:06:12.958 ************************************ 00:06:12.958 START TEST env_vtophys 00:06:12.958 ************************************ 00:06:12.958 09:11:59 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:13.216 EAL: lib.eal log level changed from notice to debug 00:06:13.216 EAL: Detected lcore 0 as core 0 on socket 0 00:06:13.216 EAL: Detected lcore 1 as core 0 on socket 0 00:06:13.216 EAL: Detected lcore 2 as core 0 on socket 0 00:06:13.216 EAL: Detected lcore 3 as core 0 on socket 0 00:06:13.216 EAL: Detected lcore 4 as core 0 on socket 0 00:06:13.216 EAL: Detected lcore 5 as core 0 on socket 0 00:06:13.216 EAL: Detected lcore 6 as core 0 on socket 0 00:06:13.216 EAL: Detected lcore 7 as core 0 on socket 0 00:06:13.216 EAL: Detected lcore 8 as core 0 on socket 0 00:06:13.216 EAL: Detected lcore 9 as core 0 on socket 0 00:06:13.216 EAL: Maximum logical cores by configuration: 128 00:06:13.216 EAL: Detected CPU lcores: 10 00:06:13.216 EAL: Detected NUMA nodes: 1 00:06:13.216 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:13.216 EAL: Detected shared linkage of DPDK 00:06:13.216 EAL: No shared files mode enabled, IPC will be disabled 00:06:13.216 EAL: Selected IOVA mode 'PA' 00:06:13.216 EAL: Probing VFIO support... 00:06:13.216 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:13.216 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:13.216 EAL: Ask a virtual area of 0x2e000 bytes 00:06:13.216 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:13.216 EAL: Setting up physically contiguous memory... 
00:06:13.216 EAL: Setting maximum number of open files to 524288 00:06:13.216 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:13.216 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:13.216 EAL: Ask a virtual area of 0x61000 bytes 00:06:13.216 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:13.216 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:13.216 EAL: Ask a virtual area of 0x400000000 bytes 00:06:13.216 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:13.216 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:13.216 EAL: Ask a virtual area of 0x61000 bytes 00:06:13.216 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:13.216 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:13.216 EAL: Ask a virtual area of 0x400000000 bytes 00:06:13.216 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:13.216 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:13.216 EAL: Ask a virtual area of 0x61000 bytes 00:06:13.216 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:13.216 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:13.216 EAL: Ask a virtual area of 0x400000000 bytes 00:06:13.216 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:13.216 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:13.216 EAL: Ask a virtual area of 0x61000 bytes 00:06:13.216 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:13.216 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:13.216 EAL: Ask a virtual area of 0x400000000 bytes 00:06:13.216 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:13.216 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:13.216 EAL: Hugepages will be freed exactly as allocated. 00:06:13.216 EAL: No shared files mode enabled, IPC is disabled 00:06:13.216 EAL: No shared files mode enabled, IPC is disabled 00:06:13.216 EAL: TSC frequency is ~2200000 KHz 00:06:13.216 EAL: Main lcore 0 is ready (tid=7f7274c0fa40;cpuset=[0]) 00:06:13.216 EAL: Trying to obtain current memory policy. 00:06:13.216 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.216 EAL: Restoring previous memory policy: 0 00:06:13.216 EAL: request: mp_malloc_sync 00:06:13.216 EAL: No shared files mode enabled, IPC is disabled 00:06:13.216 EAL: Heap on socket 0 was expanded by 2MB 00:06:13.216 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:13.216 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:13.216 EAL: Mem event callback 'spdk:(nil)' registered 00:06:13.216 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:13.216 00:06:13.216 00:06:13.216 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.216 http://cunit.sourceforge.net/ 00:06:13.216 00:06:13.216 00:06:13.216 Suite: components_suite 00:06:13.783 Test: vtophys_malloc_test ...passed 00:06:13.783 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:06:13.783 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.783 EAL: Restoring previous memory policy: 4 00:06:13.783 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.783 EAL: request: mp_malloc_sync 00:06:13.783 EAL: No shared files mode enabled, IPC is disabled 00:06:13.783 EAL: Heap on socket 0 was expanded by 4MB 00:06:13.783 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.783 EAL: request: mp_malloc_sync 00:06:13.783 EAL: No shared files mode enabled, IPC is disabled 00:06:13.783 EAL: Heap on socket 0 was shrunk by 4MB 00:06:13.783 EAL: Trying to obtain current memory policy. 00:06:13.783 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.783 EAL: Restoring previous memory policy: 4 00:06:13.783 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.783 EAL: request: mp_malloc_sync 00:06:13.783 EAL: No shared files mode enabled, IPC is disabled 00:06:13.783 EAL: Heap on socket 0 was expanded by 6MB 00:06:13.783 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.783 EAL: request: mp_malloc_sync 00:06:13.783 EAL: No shared files mode enabled, IPC is disabled 00:06:13.783 EAL: Heap on socket 0 was shrunk by 6MB 00:06:13.783 EAL: Trying to obtain current memory policy. 00:06:13.783 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.783 EAL: Restoring previous memory policy: 4 00:06:13.783 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.783 EAL: request: mp_malloc_sync 00:06:13.783 EAL: No shared files mode enabled, IPC is disabled 00:06:13.783 EAL: Heap on socket 0 was expanded by 10MB 00:06:13.783 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.783 EAL: request: mp_malloc_sync 00:06:13.783 EAL: No shared files mode enabled, IPC is disabled 00:06:13.783 EAL: Heap on socket 0 was shrunk by 10MB 00:06:13.783 EAL: Trying to obtain current memory policy. 00:06:13.783 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.783 EAL: Restoring previous memory policy: 4 00:06:13.783 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.783 EAL: request: mp_malloc_sync 00:06:13.783 EAL: No shared files mode enabled, IPC is disabled 00:06:13.783 EAL: Heap on socket 0 was expanded by 18MB 00:06:13.783 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.784 EAL: request: mp_malloc_sync 00:06:13.784 EAL: No shared files mode enabled, IPC is disabled 00:06:13.784 EAL: Heap on socket 0 was shrunk by 18MB 00:06:13.784 EAL: Trying to obtain current memory policy. 00:06:13.784 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.784 EAL: Restoring previous memory policy: 4 00:06:13.784 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.784 EAL: request: mp_malloc_sync 00:06:13.784 EAL: No shared files mode enabled, IPC is disabled 00:06:13.784 EAL: Heap on socket 0 was expanded by 34MB 00:06:13.784 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.784 EAL: request: mp_malloc_sync 00:06:13.784 EAL: No shared files mode enabled, IPC is disabled 00:06:13.784 EAL: Heap on socket 0 was shrunk by 34MB 00:06:13.784 EAL: Trying to obtain current memory policy. 
00:06:13.784 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.784 EAL: Restoring previous memory policy: 4 00:06:13.784 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.784 EAL: request: mp_malloc_sync 00:06:13.784 EAL: No shared files mode enabled, IPC is disabled 00:06:13.784 EAL: Heap on socket 0 was expanded by 66MB 00:06:14.042 EAL: Calling mem event callback 'spdk:(nil)' 00:06:14.042 EAL: request: mp_malloc_sync 00:06:14.042 EAL: No shared files mode enabled, IPC is disabled 00:06:14.042 EAL: Heap on socket 0 was shrunk by 66MB 00:06:14.042 EAL: Trying to obtain current memory policy. 00:06:14.042 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:14.042 EAL: Restoring previous memory policy: 4 00:06:14.042 EAL: Calling mem event callback 'spdk:(nil)' 00:06:14.042 EAL: request: mp_malloc_sync 00:06:14.042 EAL: No shared files mode enabled, IPC is disabled 00:06:14.042 EAL: Heap on socket 0 was expanded by 130MB 00:06:14.300 EAL: Calling mem event callback 'spdk:(nil)' 00:06:14.300 EAL: request: mp_malloc_sync 00:06:14.300 EAL: No shared files mode enabled, IPC is disabled 00:06:14.300 EAL: Heap on socket 0 was shrunk by 130MB 00:06:14.558 EAL: Trying to obtain current memory policy. 00:06:14.558 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:14.558 EAL: Restoring previous memory policy: 4 00:06:14.558 EAL: Calling mem event callback 'spdk:(nil)' 00:06:14.558 EAL: request: mp_malloc_sync 00:06:14.558 EAL: No shared files mode enabled, IPC is disabled 00:06:14.558 EAL: Heap on socket 0 was expanded by 258MB 00:06:14.817 EAL: Calling mem event callback 'spdk:(nil)' 00:06:14.817 EAL: request: mp_malloc_sync 00:06:14.817 EAL: No shared files mode enabled, IPC is disabled 00:06:14.817 EAL: Heap on socket 0 was shrunk by 258MB 00:06:15.383 EAL: Trying to obtain current memory policy. 00:06:15.383 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.383 EAL: Restoring previous memory policy: 4 00:06:15.383 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.383 EAL: request: mp_malloc_sync 00:06:15.383 EAL: No shared files mode enabled, IPC is disabled 00:06:15.383 EAL: Heap on socket 0 was expanded by 514MB 00:06:16.316 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.317 EAL: request: mp_malloc_sync 00:06:16.317 EAL: No shared files mode enabled, IPC is disabled 00:06:16.317 EAL: Heap on socket 0 was shrunk by 514MB 00:06:16.882 EAL: Trying to obtain current memory policy. 
00:06:16.882 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:17.140 EAL: Restoring previous memory policy: 4 00:06:17.140 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.140 EAL: request: mp_malloc_sync 00:06:17.140 EAL: No shared files mode enabled, IPC is disabled 00:06:17.140 EAL: Heap on socket 0 was expanded by 1026MB 00:06:18.516 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.775 EAL: request: mp_malloc_sync 00:06:18.775 EAL: No shared files mode enabled, IPC is disabled 00:06:18.775 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:20.150 passed 00:06:20.150 00:06:20.150 Run Summary: Type Total Ran Passed Failed Inactive 00:06:20.150 suites 1 1 n/a 0 0 00:06:20.150 tests 2 2 2 0 0 00:06:20.150 asserts 5362 5362 5362 0 n/a 00:06:20.150 00:06:20.150 Elapsed time = 6.784 seconds 00:06:20.150 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.150 EAL: request: mp_malloc_sync 00:06:20.150 EAL: No shared files mode enabled, IPC is disabled 00:06:20.150 EAL: Heap on socket 0 was shrunk by 2MB 00:06:20.150 EAL: No shared files mode enabled, IPC is disabled 00:06:20.150 EAL: No shared files mode enabled, IPC is disabled 00:06:20.151 EAL: No shared files mode enabled, IPC is disabled 00:06:20.151 00:06:20.151 real 0m7.104s 00:06:20.151 user 0m6.255s 00:06:20.151 sys 0m0.688s 00:06:20.151 ************************************ 00:06:20.151 END TEST env_vtophys 00:06:20.151 ************************************ 00:06:20.151 09:12:06 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.151 09:12:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:20.151 09:12:06 env -- common/autotest_common.sh@1142 -- # return 0 00:06:20.151 09:12:06 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:20.151 09:12:06 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.151 09:12:06 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.151 09:12:06 env -- common/autotest_common.sh@10 -- # set +x 00:06:20.151 ************************************ 00:06:20.151 START TEST env_pci 00:06:20.151 ************************************ 00:06:20.151 09:12:06 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:20.151 00:06:20.151 00:06:20.151 CUnit - A unit testing framework for C - Version 2.1-3 00:06:20.151 http://cunit.sourceforge.net/ 00:06:20.151 00:06:20.151 00:06:20.151 Suite: pci 00:06:20.151 Test: pci_hook ...[2024-07-12 09:12:06.463643] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 62221 has claimed it 00:06:20.151 passed 00:06:20.151 00:06:20.151 Run Summary: Type Total Ran Passed Failed Inactive 00:06:20.151 suites 1 1 n/a 0 0 00:06:20.151 tests 1 1 1 0 0 00:06:20.151 asserts 25 25 25 0 n/a 00:06:20.151 00:06:20.151 Elapsed time = 0.007 seconds 00:06:20.151 EAL: Cannot find device (10000:00:01.0) 00:06:20.151 EAL: Failed to attach device on primary process 00:06:20.409 ************************************ 00:06:20.409 END TEST env_pci 00:06:20.409 ************************************ 00:06:20.409 00:06:20.409 real 0m0.082s 00:06:20.409 user 0m0.038s 00:06:20.409 sys 0m0.043s 00:06:20.409 09:12:06 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.409 09:12:06 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:20.409 09:12:06 env -- common/autotest_common.sh@1142 -- # 
return 0 00:06:20.409 09:12:06 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:20.409 09:12:06 env -- env/env.sh@15 -- # uname 00:06:20.409 09:12:06 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:20.409 09:12:06 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:20.409 09:12:06 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:20.409 09:12:06 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:20.409 09:12:06 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.409 09:12:06 env -- common/autotest_common.sh@10 -- # set +x 00:06:20.409 ************************************ 00:06:20.409 START TEST env_dpdk_post_init 00:06:20.409 ************************************ 00:06:20.409 09:12:06 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:20.409 EAL: Detected CPU lcores: 10 00:06:20.409 EAL: Detected NUMA nodes: 1 00:06:20.409 EAL: Detected shared linkage of DPDK 00:06:20.409 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:20.409 EAL: Selected IOVA mode 'PA' 00:06:20.668 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:20.668 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:20.668 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:20.668 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:06:20.668 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:06:20.668 Starting DPDK initialization... 00:06:20.668 Starting SPDK post initialization... 00:06:20.668 SPDK NVMe probe 00:06:20.668 Attaching to 0000:00:10.0 00:06:20.668 Attaching to 0000:00:11.0 00:06:20.668 Attaching to 0000:00:12.0 00:06:20.668 Attaching to 0000:00:13.0 00:06:20.668 Attached to 0000:00:10.0 00:06:20.668 Attached to 0000:00:11.0 00:06:20.668 Attached to 0000:00:13.0 00:06:20.668 Attached to 0000:00:12.0 00:06:20.668 Cleaning up... 
00:06:20.668 00:06:20.668 real 0m0.292s 00:06:20.668 user 0m0.097s 00:06:20.668 sys 0m0.096s 00:06:20.668 09:12:06 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.668 09:12:06 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:20.668 ************************************ 00:06:20.668 END TEST env_dpdk_post_init 00:06:20.668 ************************************ 00:06:20.668 09:12:06 env -- common/autotest_common.sh@1142 -- # return 0 00:06:20.668 09:12:06 env -- env/env.sh@26 -- # uname 00:06:20.668 09:12:06 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:20.668 09:12:06 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:20.668 09:12:06 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.668 09:12:06 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.668 09:12:06 env -- common/autotest_common.sh@10 -- # set +x 00:06:20.668 ************************************ 00:06:20.668 START TEST env_mem_callbacks 00:06:20.668 ************************************ 00:06:20.668 09:12:06 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:20.668 EAL: Detected CPU lcores: 10 00:06:20.668 EAL: Detected NUMA nodes: 1 00:06:20.668 EAL: Detected shared linkage of DPDK 00:06:20.668 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:20.668 EAL: Selected IOVA mode 'PA' 00:06:20.926 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:20.926 00:06:20.926 00:06:20.926 CUnit - A unit testing framework for C - Version 2.1-3 00:06:20.926 http://cunit.sourceforge.net/ 00:06:20.926 00:06:20.926 00:06:20.926 Suite: memory 00:06:20.926 Test: test ... 
00:06:20.926 register 0x200000200000 2097152 00:06:20.926 malloc 3145728 00:06:20.926 register 0x200000400000 4194304 00:06:20.926 buf 0x2000004fffc0 len 3145728 PASSED 00:06:20.926 malloc 64 00:06:20.926 buf 0x2000004ffec0 len 64 PASSED 00:06:20.926 malloc 4194304 00:06:20.926 register 0x200000800000 6291456 00:06:20.926 buf 0x2000009fffc0 len 4194304 PASSED 00:06:20.926 free 0x2000004fffc0 3145728 00:06:20.926 free 0x2000004ffec0 64 00:06:20.926 unregister 0x200000400000 4194304 PASSED 00:06:20.926 free 0x2000009fffc0 4194304 00:06:20.926 unregister 0x200000800000 6291456 PASSED 00:06:20.926 malloc 8388608 00:06:20.926 register 0x200000400000 10485760 00:06:20.926 buf 0x2000005fffc0 len 8388608 PASSED 00:06:20.926 free 0x2000005fffc0 8388608 00:06:20.926 unregister 0x200000400000 10485760 PASSED 00:06:20.926 passed 00:06:20.926 00:06:20.926 Run Summary: Type Total Ran Passed Failed Inactive 00:06:20.926 suites 1 1 n/a 0 0 00:06:20.926 tests 1 1 1 0 0 00:06:20.926 asserts 15 15 15 0 n/a 00:06:20.926 00:06:20.926 Elapsed time = 0.062 seconds 00:06:20.926 00:06:20.926 real 0m0.265s 00:06:20.926 user 0m0.107s 00:06:20.926 sys 0m0.055s 00:06:20.926 09:12:07 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.926 09:12:07 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:20.926 ************************************ 00:06:20.926 END TEST env_mem_callbacks 00:06:20.926 ************************************ 00:06:20.926 09:12:07 env -- common/autotest_common.sh@1142 -- # return 0 00:06:20.926 00:06:20.926 real 0m8.466s 00:06:20.926 user 0m6.977s 00:06:20.926 sys 0m1.104s 00:06:20.926 ************************************ 00:06:20.926 END TEST env 00:06:20.926 ************************************ 00:06:20.926 09:12:07 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.926 09:12:07 env -- common/autotest_common.sh@10 -- # set +x 00:06:20.926 09:12:07 -- common/autotest_common.sh@1142 -- # return 0 00:06:20.926 09:12:07 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:20.926 09:12:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.926 09:12:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.926 09:12:07 -- common/autotest_common.sh@10 -- # set +x 00:06:20.926 ************************************ 00:06:20.926 START TEST rpc 00:06:20.926 ************************************ 00:06:20.926 09:12:07 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:21.184 * Looking for test storage... 00:06:21.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:21.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.184 09:12:07 rpc -- rpc/rpc.sh@65 -- # spdk_pid=62335 00:06:21.184 09:12:07 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:21.184 09:12:07 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.184 09:12:07 rpc -- rpc/rpc.sh@67 -- # waitforlisten 62335 00:06:21.184 09:12:07 rpc -- common/autotest_common.sh@829 -- # '[' -z 62335 ']' 00:06:21.184 09:12:07 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.184 09:12:07 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.184 09:12:07 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:21.184 09:12:07 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.184 09:12:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.184 [2024-07-12 09:12:07.464866] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:21.184 [2024-07-12 09:12:07.465717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62335 ] 00:06:21.442 [2024-07-12 09:12:07.636855] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.700 [2024-07-12 09:12:07.825408] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:21.700 [2024-07-12 09:12:07.825490] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 62335' to capture a snapshot of events at runtime. 00:06:21.700 [2024-07-12 09:12:07.825511] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:21.700 [2024-07-12 09:12:07.825523] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:21.700 [2024-07-12 09:12:07.825537] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid62335 for offline analysis/debug. 00:06:21.700 [2024-07-12 09:12:07.825592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.267 09:12:08 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.267 09:12:08 rpc -- common/autotest_common.sh@862 -- # return 0 00:06:22.267 09:12:08 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:22.267 09:12:08 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:22.267 09:12:08 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:22.267 09:12:08 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:22.267 09:12:08 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.267 09:12:08 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.267 09:12:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.267 ************************************ 00:06:22.267 START TEST rpc_integrity 00:06:22.267 ************************************ 00:06:22.267 09:12:08 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:22.267 09:12:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:22.267 09:12:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.267 09:12:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.267 09:12:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.267 09:12:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:22.267 09:12:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:22.267 09:12:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:22.267 09:12:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:22.267 09:12:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.267 09:12:08 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.267 09:12:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.267 09:12:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:22.267 09:12:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:22.267 09:12:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.267 09:12:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.267 09:12:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.267 09:12:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:22.267 { 00:06:22.267 "name": "Malloc0", 00:06:22.267 "aliases": [ 00:06:22.267 "38f445eb-a86c-4657-bafa-25002ec9eb7c" 00:06:22.267 ], 00:06:22.267 "product_name": "Malloc disk", 00:06:22.267 "block_size": 512, 00:06:22.267 "num_blocks": 16384, 00:06:22.267 "uuid": "38f445eb-a86c-4657-bafa-25002ec9eb7c", 00:06:22.267 "assigned_rate_limits": { 00:06:22.267 "rw_ios_per_sec": 0, 00:06:22.267 "rw_mbytes_per_sec": 0, 00:06:22.267 "r_mbytes_per_sec": 0, 00:06:22.267 "w_mbytes_per_sec": 0 00:06:22.267 }, 00:06:22.267 "claimed": false, 00:06:22.267 "zoned": false, 00:06:22.267 "supported_io_types": { 00:06:22.267 "read": true, 00:06:22.267 "write": true, 00:06:22.267 "unmap": true, 00:06:22.267 "flush": true, 00:06:22.267 "reset": true, 00:06:22.267 "nvme_admin": false, 00:06:22.267 "nvme_io": false, 00:06:22.267 "nvme_io_md": false, 00:06:22.267 "write_zeroes": true, 00:06:22.267 "zcopy": true, 00:06:22.267 "get_zone_info": false, 00:06:22.267 "zone_management": false, 00:06:22.267 "zone_append": false, 00:06:22.267 "compare": false, 00:06:22.267 "compare_and_write": false, 00:06:22.267 "abort": true, 00:06:22.267 "seek_hole": false, 00:06:22.267 "seek_data": false, 00:06:22.267 "copy": true, 00:06:22.267 "nvme_iov_md": false 00:06:22.267 }, 00:06:22.267 "memory_domains": [ 00:06:22.267 { 00:06:22.267 "dma_device_id": "system", 00:06:22.267 "dma_device_type": 1 00:06:22.267 }, 00:06:22.267 { 00:06:22.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:22.267 "dma_device_type": 2 00:06:22.267 } 00:06:22.267 ], 00:06:22.267 "driver_specific": {} 00:06:22.267 } 00:06:22.267 ]' 00:06:22.526 09:12:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:22.526 09:12:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:22.526 09:12:08 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:22.526 09:12:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.526 09:12:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.526 [2024-07-12 09:12:08.666289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:22.526 [2024-07-12 09:12:08.666378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:22.526 [2024-07-12 09:12:08.666424] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:22.526 [2024-07-12 09:12:08.666440] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:22.526 [2024-07-12 09:12:08.669095] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:22.526 [2024-07-12 09:12:08.669154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:22.526 Passthru0 00:06:22.526 09:12:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.526 
09:12:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:22.526 09:12:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.526 09:12:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.526 09:12:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.526 09:12:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:22.526 { 00:06:22.526 "name": "Malloc0", 00:06:22.526 "aliases": [ 00:06:22.526 "38f445eb-a86c-4657-bafa-25002ec9eb7c" 00:06:22.526 ], 00:06:22.526 "product_name": "Malloc disk", 00:06:22.526 "block_size": 512, 00:06:22.526 "num_blocks": 16384, 00:06:22.526 "uuid": "38f445eb-a86c-4657-bafa-25002ec9eb7c", 00:06:22.526 "assigned_rate_limits": { 00:06:22.526 "rw_ios_per_sec": 0, 00:06:22.526 "rw_mbytes_per_sec": 0, 00:06:22.526 "r_mbytes_per_sec": 0, 00:06:22.526 "w_mbytes_per_sec": 0 00:06:22.526 }, 00:06:22.526 "claimed": true, 00:06:22.526 "claim_type": "exclusive_write", 00:06:22.526 "zoned": false, 00:06:22.526 "supported_io_types": { 00:06:22.526 "read": true, 00:06:22.526 "write": true, 00:06:22.526 "unmap": true, 00:06:22.526 "flush": true, 00:06:22.526 "reset": true, 00:06:22.526 "nvme_admin": false, 00:06:22.526 "nvme_io": false, 00:06:22.526 "nvme_io_md": false, 00:06:22.526 "write_zeroes": true, 00:06:22.526 "zcopy": true, 00:06:22.526 "get_zone_info": false, 00:06:22.526 "zone_management": false, 00:06:22.526 "zone_append": false, 00:06:22.526 "compare": false, 00:06:22.526 "compare_and_write": false, 00:06:22.526 "abort": true, 00:06:22.526 "seek_hole": false, 00:06:22.526 "seek_data": false, 00:06:22.526 "copy": true, 00:06:22.526 "nvme_iov_md": false 00:06:22.526 }, 00:06:22.526 "memory_domains": [ 00:06:22.526 { 00:06:22.526 "dma_device_id": "system", 00:06:22.526 "dma_device_type": 1 00:06:22.526 }, 00:06:22.526 { 00:06:22.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:22.526 "dma_device_type": 2 00:06:22.526 } 00:06:22.526 ], 00:06:22.527 "driver_specific": {} 00:06:22.527 }, 00:06:22.527 { 00:06:22.527 "name": "Passthru0", 00:06:22.527 "aliases": [ 00:06:22.527 "7012f944-7b2e-5c77-b2e0-5c47bf9441ea" 00:06:22.527 ], 00:06:22.527 "product_name": "passthru", 00:06:22.527 "block_size": 512, 00:06:22.527 "num_blocks": 16384, 00:06:22.527 "uuid": "7012f944-7b2e-5c77-b2e0-5c47bf9441ea", 00:06:22.527 "assigned_rate_limits": { 00:06:22.527 "rw_ios_per_sec": 0, 00:06:22.527 "rw_mbytes_per_sec": 0, 00:06:22.527 "r_mbytes_per_sec": 0, 00:06:22.527 "w_mbytes_per_sec": 0 00:06:22.527 }, 00:06:22.527 "claimed": false, 00:06:22.527 "zoned": false, 00:06:22.527 "supported_io_types": { 00:06:22.527 "read": true, 00:06:22.527 "write": true, 00:06:22.527 "unmap": true, 00:06:22.527 "flush": true, 00:06:22.527 "reset": true, 00:06:22.527 "nvme_admin": false, 00:06:22.527 "nvme_io": false, 00:06:22.527 "nvme_io_md": false, 00:06:22.527 "write_zeroes": true, 00:06:22.527 "zcopy": true, 00:06:22.527 "get_zone_info": false, 00:06:22.527 "zone_management": false, 00:06:22.527 "zone_append": false, 00:06:22.527 "compare": false, 00:06:22.527 "compare_and_write": false, 00:06:22.527 "abort": true, 00:06:22.527 "seek_hole": false, 00:06:22.527 "seek_data": false, 00:06:22.527 "copy": true, 00:06:22.527 "nvme_iov_md": false 00:06:22.527 }, 00:06:22.527 "memory_domains": [ 00:06:22.527 { 00:06:22.527 "dma_device_id": "system", 00:06:22.527 "dma_device_type": 1 00:06:22.527 }, 00:06:22.527 { 00:06:22.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:22.527 "dma_device_type": 2 
00:06:22.527 } 00:06:22.527 ], 00:06:22.527 "driver_specific": { 00:06:22.527 "passthru": { 00:06:22.527 "name": "Passthru0", 00:06:22.527 "base_bdev_name": "Malloc0" 00:06:22.527 } 00:06:22.527 } 00:06:22.527 } 00:06:22.527 ]' 00:06:22.527 09:12:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:22.527 09:12:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:22.527 09:12:08 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:22.527 09:12:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.527 09:12:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.527 09:12:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.527 09:12:08 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:22.527 09:12:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.527 09:12:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.527 09:12:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.527 09:12:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:22.527 09:12:08 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.527 09:12:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.527 09:12:08 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.527 09:12:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:22.527 09:12:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:22.527 ************************************ 00:06:22.527 END TEST rpc_integrity 00:06:22.527 ************************************ 00:06:22.527 09:12:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:22.527 00:06:22.527 real 0m0.346s 00:06:22.527 user 0m0.213s 00:06:22.527 sys 0m0.040s 00:06:22.527 09:12:08 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.527 09:12:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.786 09:12:08 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:22.786 09:12:08 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:22.786 09:12:08 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.786 09:12:08 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.786 09:12:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.786 ************************************ 00:06:22.786 START TEST rpc_plugins 00:06:22.786 ************************************ 00:06:22.786 09:12:08 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:06:22.786 09:12:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:22.786 09:12:08 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.786 09:12:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:22.786 09:12:08 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.786 09:12:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:22.786 09:12:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:22.786 09:12:08 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.786 09:12:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:22.786 09:12:08 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.786 09:12:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
bdevs='[ 00:06:22.786 { 00:06:22.786 "name": "Malloc1", 00:06:22.786 "aliases": [ 00:06:22.786 "fa90cc6f-5735-451e-abf2-b1e04a4e7f12" 00:06:22.786 ], 00:06:22.786 "product_name": "Malloc disk", 00:06:22.786 "block_size": 4096, 00:06:22.786 "num_blocks": 256, 00:06:22.786 "uuid": "fa90cc6f-5735-451e-abf2-b1e04a4e7f12", 00:06:22.786 "assigned_rate_limits": { 00:06:22.786 "rw_ios_per_sec": 0, 00:06:22.786 "rw_mbytes_per_sec": 0, 00:06:22.786 "r_mbytes_per_sec": 0, 00:06:22.786 "w_mbytes_per_sec": 0 00:06:22.786 }, 00:06:22.786 "claimed": false, 00:06:22.786 "zoned": false, 00:06:22.786 "supported_io_types": { 00:06:22.786 "read": true, 00:06:22.786 "write": true, 00:06:22.786 "unmap": true, 00:06:22.786 "flush": true, 00:06:22.786 "reset": true, 00:06:22.786 "nvme_admin": false, 00:06:22.786 "nvme_io": false, 00:06:22.786 "nvme_io_md": false, 00:06:22.786 "write_zeroes": true, 00:06:22.786 "zcopy": true, 00:06:22.786 "get_zone_info": false, 00:06:22.786 "zone_management": false, 00:06:22.786 "zone_append": false, 00:06:22.786 "compare": false, 00:06:22.786 "compare_and_write": false, 00:06:22.786 "abort": true, 00:06:22.786 "seek_hole": false, 00:06:22.786 "seek_data": false, 00:06:22.786 "copy": true, 00:06:22.786 "nvme_iov_md": false 00:06:22.786 }, 00:06:22.786 "memory_domains": [ 00:06:22.786 { 00:06:22.786 "dma_device_id": "system", 00:06:22.786 "dma_device_type": 1 00:06:22.786 }, 00:06:22.786 { 00:06:22.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:22.786 "dma_device_type": 2 00:06:22.786 } 00:06:22.786 ], 00:06:22.786 "driver_specific": {} 00:06:22.786 } 00:06:22.786 ]' 00:06:22.786 09:12:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:22.786 09:12:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:22.786 09:12:08 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:22.786 09:12:08 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.786 09:12:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:22.786 09:12:09 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.786 09:12:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:22.786 09:12:09 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.786 09:12:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:22.786 09:12:09 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.786 09:12:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:22.786 09:12:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:22.786 ************************************ 00:06:22.786 END TEST rpc_plugins 00:06:22.786 ************************************ 00:06:22.786 09:12:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:22.786 00:06:22.786 real 0m0.169s 00:06:22.786 user 0m0.107s 00:06:22.786 sys 0m0.021s 00:06:22.786 09:12:09 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.786 09:12:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:22.786 09:12:09 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:22.786 09:12:09 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:22.786 09:12:09 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.786 09:12:09 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.786 09:12:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.786 ************************************ 00:06:22.786 
START TEST rpc_trace_cmd_test 00:06:22.786 ************************************ 00:06:22.786 09:12:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:06:22.786 09:12:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:22.786 09:12:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:22.786 09:12:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.786 09:12:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.044 09:12:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.044 09:12:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:23.044 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid62335", 00:06:23.044 "tpoint_group_mask": "0x8", 00:06:23.044 "iscsi_conn": { 00:06:23.044 "mask": "0x2", 00:06:23.044 "tpoint_mask": "0x0" 00:06:23.044 }, 00:06:23.044 "scsi": { 00:06:23.044 "mask": "0x4", 00:06:23.045 "tpoint_mask": "0x0" 00:06:23.045 }, 00:06:23.045 "bdev": { 00:06:23.045 "mask": "0x8", 00:06:23.045 "tpoint_mask": "0xffffffffffffffff" 00:06:23.045 }, 00:06:23.045 "nvmf_rdma": { 00:06:23.045 "mask": "0x10", 00:06:23.045 "tpoint_mask": "0x0" 00:06:23.045 }, 00:06:23.045 "nvmf_tcp": { 00:06:23.045 "mask": "0x20", 00:06:23.045 "tpoint_mask": "0x0" 00:06:23.045 }, 00:06:23.045 "ftl": { 00:06:23.045 "mask": "0x40", 00:06:23.045 "tpoint_mask": "0x0" 00:06:23.045 }, 00:06:23.045 "blobfs": { 00:06:23.045 "mask": "0x80", 00:06:23.045 "tpoint_mask": "0x0" 00:06:23.045 }, 00:06:23.045 "dsa": { 00:06:23.045 "mask": "0x200", 00:06:23.045 "tpoint_mask": "0x0" 00:06:23.045 }, 00:06:23.045 "thread": { 00:06:23.045 "mask": "0x400", 00:06:23.045 "tpoint_mask": "0x0" 00:06:23.045 }, 00:06:23.045 "nvme_pcie": { 00:06:23.045 "mask": "0x800", 00:06:23.045 "tpoint_mask": "0x0" 00:06:23.045 }, 00:06:23.045 "iaa": { 00:06:23.045 "mask": "0x1000", 00:06:23.045 "tpoint_mask": "0x0" 00:06:23.045 }, 00:06:23.045 "nvme_tcp": { 00:06:23.045 "mask": "0x2000", 00:06:23.045 "tpoint_mask": "0x0" 00:06:23.045 }, 00:06:23.045 "bdev_nvme": { 00:06:23.045 "mask": "0x4000", 00:06:23.045 "tpoint_mask": "0x0" 00:06:23.045 }, 00:06:23.045 "sock": { 00:06:23.045 "mask": "0x8000", 00:06:23.045 "tpoint_mask": "0x0" 00:06:23.045 } 00:06:23.045 }' 00:06:23.045 09:12:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:23.045 09:12:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:23.045 09:12:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:23.045 09:12:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:23.045 09:12:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:23.045 09:12:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:23.045 09:12:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:23.045 09:12:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:23.045 09:12:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:23.303 ************************************ 00:06:23.303 END TEST rpc_trace_cmd_test 00:06:23.303 ************************************ 00:06:23.303 09:12:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:23.303 00:06:23.303 real 0m0.281s 00:06:23.303 user 0m0.247s 00:06:23.303 sys 0m0.024s 00:06:23.303 09:12:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.303 
09:12:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.303 09:12:09 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:23.303 09:12:09 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:23.304 09:12:09 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:23.304 09:12:09 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:23.304 09:12:09 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.304 09:12:09 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.304 09:12:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.304 ************************************ 00:06:23.304 START TEST rpc_daemon_integrity 00:06:23.304 ************************************ 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:23.304 { 00:06:23.304 "name": "Malloc2", 00:06:23.304 "aliases": [ 00:06:23.304 "a5aa5d5d-7298-4755-add1-8fb93eb81468" 00:06:23.304 ], 00:06:23.304 "product_name": "Malloc disk", 00:06:23.304 "block_size": 512, 00:06:23.304 "num_blocks": 16384, 00:06:23.304 "uuid": "a5aa5d5d-7298-4755-add1-8fb93eb81468", 00:06:23.304 "assigned_rate_limits": { 00:06:23.304 "rw_ios_per_sec": 0, 00:06:23.304 "rw_mbytes_per_sec": 0, 00:06:23.304 "r_mbytes_per_sec": 0, 00:06:23.304 "w_mbytes_per_sec": 0 00:06:23.304 }, 00:06:23.304 "claimed": false, 00:06:23.304 "zoned": false, 00:06:23.304 "supported_io_types": { 00:06:23.304 "read": true, 00:06:23.304 "write": true, 00:06:23.304 "unmap": true, 00:06:23.304 "flush": true, 00:06:23.304 "reset": true, 00:06:23.304 "nvme_admin": false, 00:06:23.304 "nvme_io": false, 00:06:23.304 "nvme_io_md": false, 00:06:23.304 "write_zeroes": true, 00:06:23.304 "zcopy": true, 00:06:23.304 "get_zone_info": false, 00:06:23.304 "zone_management": false, 00:06:23.304 "zone_append": false, 00:06:23.304 "compare": false, 00:06:23.304 "compare_and_write": false, 00:06:23.304 "abort": true, 00:06:23.304 "seek_hole": false, 
00:06:23.304 "seek_data": false, 00:06:23.304 "copy": true, 00:06:23.304 "nvme_iov_md": false 00:06:23.304 }, 00:06:23.304 "memory_domains": [ 00:06:23.304 { 00:06:23.304 "dma_device_id": "system", 00:06:23.304 "dma_device_type": 1 00:06:23.304 }, 00:06:23.304 { 00:06:23.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:23.304 "dma_device_type": 2 00:06:23.304 } 00:06:23.304 ], 00:06:23.304 "driver_specific": {} 00:06:23.304 } 00:06:23.304 ]' 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.304 [2024-07-12 09:12:09.616401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:23.304 [2024-07-12 09:12:09.616480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:23.304 [2024-07-12 09:12:09.616517] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:06:23.304 [2024-07-12 09:12:09.616532] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:23.304 [2024-07-12 09:12:09.619197] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:23.304 [2024-07-12 09:12:09.619246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:23.304 Passthru0 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.304 09:12:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.563 09:12:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:23.563 { 00:06:23.563 "name": "Malloc2", 00:06:23.563 "aliases": [ 00:06:23.563 "a5aa5d5d-7298-4755-add1-8fb93eb81468" 00:06:23.563 ], 00:06:23.563 "product_name": "Malloc disk", 00:06:23.563 "block_size": 512, 00:06:23.563 "num_blocks": 16384, 00:06:23.563 "uuid": "a5aa5d5d-7298-4755-add1-8fb93eb81468", 00:06:23.563 "assigned_rate_limits": { 00:06:23.563 "rw_ios_per_sec": 0, 00:06:23.563 "rw_mbytes_per_sec": 0, 00:06:23.563 "r_mbytes_per_sec": 0, 00:06:23.563 "w_mbytes_per_sec": 0 00:06:23.563 }, 00:06:23.563 "claimed": true, 00:06:23.563 "claim_type": "exclusive_write", 00:06:23.563 "zoned": false, 00:06:23.563 "supported_io_types": { 00:06:23.563 "read": true, 00:06:23.563 "write": true, 00:06:23.563 "unmap": true, 00:06:23.563 "flush": true, 00:06:23.563 "reset": true, 00:06:23.563 "nvme_admin": false, 00:06:23.563 "nvme_io": false, 00:06:23.563 "nvme_io_md": false, 00:06:23.563 "write_zeroes": true, 00:06:23.563 "zcopy": true, 00:06:23.563 "get_zone_info": false, 00:06:23.563 "zone_management": false, 00:06:23.563 "zone_append": false, 00:06:23.563 "compare": false, 00:06:23.563 "compare_and_write": false, 00:06:23.563 "abort": true, 00:06:23.563 "seek_hole": false, 00:06:23.563 "seek_data": false, 00:06:23.563 "copy": true, 00:06:23.563 "nvme_iov_md": false 00:06:23.563 }, 00:06:23.563 
"memory_domains": [ 00:06:23.563 { 00:06:23.563 "dma_device_id": "system", 00:06:23.563 "dma_device_type": 1 00:06:23.563 }, 00:06:23.563 { 00:06:23.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:23.563 "dma_device_type": 2 00:06:23.563 } 00:06:23.563 ], 00:06:23.563 "driver_specific": {} 00:06:23.563 }, 00:06:23.563 { 00:06:23.563 "name": "Passthru0", 00:06:23.563 "aliases": [ 00:06:23.563 "48dcc408-9f9f-5d60-977a-ee11c426b3ba" 00:06:23.563 ], 00:06:23.563 "product_name": "passthru", 00:06:23.563 "block_size": 512, 00:06:23.563 "num_blocks": 16384, 00:06:23.563 "uuid": "48dcc408-9f9f-5d60-977a-ee11c426b3ba", 00:06:23.563 "assigned_rate_limits": { 00:06:23.563 "rw_ios_per_sec": 0, 00:06:23.563 "rw_mbytes_per_sec": 0, 00:06:23.563 "r_mbytes_per_sec": 0, 00:06:23.563 "w_mbytes_per_sec": 0 00:06:23.563 }, 00:06:23.563 "claimed": false, 00:06:23.563 "zoned": false, 00:06:23.563 "supported_io_types": { 00:06:23.563 "read": true, 00:06:23.563 "write": true, 00:06:23.563 "unmap": true, 00:06:23.563 "flush": true, 00:06:23.563 "reset": true, 00:06:23.563 "nvme_admin": false, 00:06:23.563 "nvme_io": false, 00:06:23.563 "nvme_io_md": false, 00:06:23.563 "write_zeroes": true, 00:06:23.563 "zcopy": true, 00:06:23.563 "get_zone_info": false, 00:06:23.563 "zone_management": false, 00:06:23.563 "zone_append": false, 00:06:23.563 "compare": false, 00:06:23.563 "compare_and_write": false, 00:06:23.563 "abort": true, 00:06:23.563 "seek_hole": false, 00:06:23.563 "seek_data": false, 00:06:23.563 "copy": true, 00:06:23.563 "nvme_iov_md": false 00:06:23.563 }, 00:06:23.563 "memory_domains": [ 00:06:23.563 { 00:06:23.563 "dma_device_id": "system", 00:06:23.563 "dma_device_type": 1 00:06:23.563 }, 00:06:23.563 { 00:06:23.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:23.563 "dma_device_type": 2 00:06:23.563 } 00:06:23.563 ], 00:06:23.563 "driver_specific": { 00:06:23.563 "passthru": { 00:06:23.563 "name": "Passthru0", 00:06:23.563 "base_bdev_name": "Malloc2" 00:06:23.563 } 00:06:23.563 } 00:06:23.563 } 00:06:23.563 ]' 00:06:23.563 09:12:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:23.563 09:12:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:23.563 09:12:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:23.563 09:12:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.563 09:12:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.563 09:12:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.563 09:12:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:23.563 09:12:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.563 09:12:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.563 09:12:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.563 09:12:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:23.563 09:12:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.563 09:12:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.563 09:12:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.563 09:12:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:23.563 09:12:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:23.563 
************************************ 00:06:23.563 END TEST rpc_daemon_integrity 00:06:23.563 ************************************ 00:06:23.563 09:12:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:23.563 00:06:23.563 real 0m0.361s 00:06:23.563 user 0m0.226s 00:06:23.563 sys 0m0.040s 00:06:23.563 09:12:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.563 09:12:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.563 09:12:09 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:23.563 09:12:09 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:23.563 09:12:09 rpc -- rpc/rpc.sh@84 -- # killprocess 62335 00:06:23.563 09:12:09 rpc -- common/autotest_common.sh@948 -- # '[' -z 62335 ']' 00:06:23.563 09:12:09 rpc -- common/autotest_common.sh@952 -- # kill -0 62335 00:06:23.563 09:12:09 rpc -- common/autotest_common.sh@953 -- # uname 00:06:23.563 09:12:09 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:23.563 09:12:09 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62335 00:06:23.563 killing process with pid 62335 00:06:23.563 09:12:09 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:23.563 09:12:09 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:23.563 09:12:09 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62335' 00:06:23.563 09:12:09 rpc -- common/autotest_common.sh@967 -- # kill 62335 00:06:23.563 09:12:09 rpc -- common/autotest_common.sh@972 -- # wait 62335 00:06:26.091 ************************************ 00:06:26.091 END TEST rpc 00:06:26.091 ************************************ 00:06:26.091 00:06:26.091 real 0m4.702s 00:06:26.091 user 0m5.556s 00:06:26.091 sys 0m0.705s 00:06:26.091 09:12:11 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.091 09:12:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.091 09:12:12 -- common/autotest_common.sh@1142 -- # return 0 00:06:26.091 09:12:12 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:26.091 09:12:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.091 09:12:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.091 09:12:12 -- common/autotest_common.sh@10 -- # set +x 00:06:26.091 ************************************ 00:06:26.091 START TEST skip_rpc 00:06:26.091 ************************************ 00:06:26.091 09:12:12 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:26.091 * Looking for test storage... 
00:06:26.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:26.091 09:12:12 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:26.091 09:12:12 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:26.091 09:12:12 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:26.091 09:12:12 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.091 09:12:12 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.091 09:12:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.091 ************************************ 00:06:26.091 START TEST skip_rpc 00:06:26.091 ************************************ 00:06:26.091 09:12:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:06:26.091 09:12:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=62556 00:06:26.091 09:12:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:26.091 09:12:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.091 09:12:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:26.091 [2024-07-12 09:12:12.221431] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:26.091 [2024-07-12 09:12:12.221611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62556 ] 00:06:26.091 [2024-07-12 09:12:12.396911] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.350 [2024-07-12 09:12:12.625712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.610 09:12:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:31.610 09:12:17 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:31.611 09:12:17 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:31.611 09:12:17 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:31.611 09:12:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:31.611 09:12:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:31.611 09:12:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:31.611 09:12:17 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:31.611 09:12:17 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.611 09:12:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.611 09:12:17 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:31.611 09:12:17 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:31.611 09:12:17 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:31.611 09:12:17 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:31.611 09:12:17 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:31.611 09:12:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:31.611 09:12:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 62556 
00:06:31.611 09:12:17 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 62556 ']' 00:06:31.611 09:12:17 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 62556 00:06:31.611 09:12:17 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:06:31.611 09:12:17 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:31.611 09:12:17 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62556 00:06:31.611 killing process with pid 62556 00:06:31.611 09:12:17 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:31.611 09:12:17 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:31.611 09:12:17 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62556' 00:06:31.611 09:12:17 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 62556 00:06:31.611 09:12:17 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 62556 00:06:32.985 00:06:32.985 real 0m7.114s 00:06:32.985 user 0m6.670s 00:06:32.985 sys 0m0.330s 00:06:32.985 09:12:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.985 09:12:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.985 ************************************ 00:06:32.985 END TEST skip_rpc 00:06:32.985 ************************************ 00:06:32.985 09:12:19 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:32.985 09:12:19 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:32.985 09:12:19 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.985 09:12:19 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.985 09:12:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.985 ************************************ 00:06:32.985 START TEST skip_rpc_with_json 00:06:32.985 ************************************ 00:06:32.985 09:12:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:06:32.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.985 09:12:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:32.985 09:12:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=62660 00:06:32.985 09:12:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.985 09:12:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:32.985 09:12:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 62660 00:06:32.985 09:12:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 62660 ']' 00:06:32.985 09:12:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.985 09:12:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.985 09:12:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:32.985 09:12:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.986 09:12:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:33.244 [2024-07-12 09:12:19.376717] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:33.244 [2024-07-12 09:12:19.377092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62660 ] 00:06:33.244 [2024-07-12 09:12:19.542003] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.502 [2024-07-12 09:12:19.721102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.438 09:12:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.438 09:12:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:06:34.438 09:12:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:34.438 09:12:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.438 09:12:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:34.438 [2024-07-12 09:12:20.434426] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:34.438 request: 00:06:34.438 { 00:06:34.438 "trtype": "tcp", 00:06:34.438 "method": "nvmf_get_transports", 00:06:34.438 "req_id": 1 00:06:34.438 } 00:06:34.438 Got JSON-RPC error response 00:06:34.438 response: 00:06:34.438 { 00:06:34.438 "code": -19, 00:06:34.438 "message": "No such device" 00:06:34.438 } 00:06:34.438 09:12:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:34.438 09:12:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:34.438 09:12:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.438 09:12:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:34.438 [2024-07-12 09:12:20.446598] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:34.438 09:12:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.438 09:12:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:34.438 09:12:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.438 09:12:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:34.438 09:12:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.438 09:12:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:34.438 { 00:06:34.438 "subsystems": [ 00:06:34.438 { 00:06:34.438 "subsystem": "keyring", 00:06:34.438 "config": [] 00:06:34.438 }, 00:06:34.438 { 00:06:34.438 "subsystem": "iobuf", 00:06:34.438 "config": [ 00:06:34.438 { 00:06:34.438 "method": "iobuf_set_options", 00:06:34.438 "params": { 00:06:34.439 "small_pool_count": 8192, 00:06:34.439 "large_pool_count": 1024, 00:06:34.439 "small_bufsize": 8192, 00:06:34.439 "large_bufsize": 135168 00:06:34.439 } 00:06:34.439 } 00:06:34.439 ] 00:06:34.439 }, 00:06:34.439 { 00:06:34.439 "subsystem": "sock", 00:06:34.439 "config": [ 00:06:34.439 { 00:06:34.439 "method": 
"sock_set_default_impl", 00:06:34.439 "params": { 00:06:34.439 "impl_name": "posix" 00:06:34.439 } 00:06:34.439 }, 00:06:34.439 { 00:06:34.439 "method": "sock_impl_set_options", 00:06:34.439 "params": { 00:06:34.439 "impl_name": "ssl", 00:06:34.439 "recv_buf_size": 4096, 00:06:34.439 "send_buf_size": 4096, 00:06:34.439 "enable_recv_pipe": true, 00:06:34.439 "enable_quickack": false, 00:06:34.439 "enable_placement_id": 0, 00:06:34.439 "enable_zerocopy_send_server": true, 00:06:34.439 "enable_zerocopy_send_client": false, 00:06:34.439 "zerocopy_threshold": 0, 00:06:34.439 "tls_version": 0, 00:06:34.439 "enable_ktls": false 00:06:34.439 } 00:06:34.439 }, 00:06:34.439 { 00:06:34.439 "method": "sock_impl_set_options", 00:06:34.439 "params": { 00:06:34.439 "impl_name": "posix", 00:06:34.439 "recv_buf_size": 2097152, 00:06:34.439 "send_buf_size": 2097152, 00:06:34.439 "enable_recv_pipe": true, 00:06:34.439 "enable_quickack": false, 00:06:34.439 "enable_placement_id": 0, 00:06:34.439 "enable_zerocopy_send_server": true, 00:06:34.439 "enable_zerocopy_send_client": false, 00:06:34.439 "zerocopy_threshold": 0, 00:06:34.439 "tls_version": 0, 00:06:34.439 "enable_ktls": false 00:06:34.439 } 00:06:34.439 } 00:06:34.439 ] 00:06:34.439 }, 00:06:34.439 { 00:06:34.439 "subsystem": "vmd", 00:06:34.439 "config": [] 00:06:34.439 }, 00:06:34.439 { 00:06:34.439 "subsystem": "accel", 00:06:34.439 "config": [ 00:06:34.439 { 00:06:34.439 "method": "accel_set_options", 00:06:34.439 "params": { 00:06:34.439 "small_cache_size": 128, 00:06:34.439 "large_cache_size": 16, 00:06:34.439 "task_count": 2048, 00:06:34.439 "sequence_count": 2048, 00:06:34.439 "buf_count": 2048 00:06:34.439 } 00:06:34.439 } 00:06:34.439 ] 00:06:34.439 }, 00:06:34.439 { 00:06:34.439 "subsystem": "bdev", 00:06:34.439 "config": [ 00:06:34.439 { 00:06:34.439 "method": "bdev_set_options", 00:06:34.439 "params": { 00:06:34.439 "bdev_io_pool_size": 65535, 00:06:34.439 "bdev_io_cache_size": 256, 00:06:34.439 "bdev_auto_examine": true, 00:06:34.439 "iobuf_small_cache_size": 128, 00:06:34.439 "iobuf_large_cache_size": 16 00:06:34.439 } 00:06:34.439 }, 00:06:34.439 { 00:06:34.439 "method": "bdev_raid_set_options", 00:06:34.439 "params": { 00:06:34.439 "process_window_size_kb": 1024 00:06:34.439 } 00:06:34.439 }, 00:06:34.439 { 00:06:34.439 "method": "bdev_iscsi_set_options", 00:06:34.439 "params": { 00:06:34.439 "timeout_sec": 30 00:06:34.439 } 00:06:34.439 }, 00:06:34.439 { 00:06:34.439 "method": "bdev_nvme_set_options", 00:06:34.439 "params": { 00:06:34.439 "action_on_timeout": "none", 00:06:34.439 "timeout_us": 0, 00:06:34.439 "timeout_admin_us": 0, 00:06:34.439 "keep_alive_timeout_ms": 10000, 00:06:34.439 "arbitration_burst": 0, 00:06:34.439 "low_priority_weight": 0, 00:06:34.439 "medium_priority_weight": 0, 00:06:34.439 "high_priority_weight": 0, 00:06:34.439 "nvme_adminq_poll_period_us": 10000, 00:06:34.439 "nvme_ioq_poll_period_us": 0, 00:06:34.439 "io_queue_requests": 0, 00:06:34.439 "delay_cmd_submit": true, 00:06:34.439 "transport_retry_count": 4, 00:06:34.439 "bdev_retry_count": 3, 00:06:34.439 "transport_ack_timeout": 0, 00:06:34.439 "ctrlr_loss_timeout_sec": 0, 00:06:34.439 "reconnect_delay_sec": 0, 00:06:34.439 "fast_io_fail_timeout_sec": 0, 00:06:34.439 "disable_auto_failback": false, 00:06:34.439 "generate_uuids": false, 00:06:34.439 "transport_tos": 0, 00:06:34.439 "nvme_error_stat": false, 00:06:34.439 "rdma_srq_size": 0, 00:06:34.439 "io_path_stat": false, 00:06:34.439 "allow_accel_sequence": false, 00:06:34.439 "rdma_max_cq_size": 0, 
00:06:34.439 "rdma_cm_event_timeout_ms": 0, 00:06:34.439 "dhchap_digests": [ 00:06:34.439 "sha256", 00:06:34.439 "sha384", 00:06:34.439 "sha512" 00:06:34.439 ], 00:06:34.439 "dhchap_dhgroups": [ 00:06:34.439 "null", 00:06:34.439 "ffdhe2048", 00:06:34.439 "ffdhe3072", 00:06:34.439 "ffdhe4096", 00:06:34.439 "ffdhe6144", 00:06:34.439 "ffdhe8192" 00:06:34.439 ] 00:06:34.439 } 00:06:34.439 }, 00:06:34.439 { 00:06:34.439 "method": "bdev_nvme_set_hotplug", 00:06:34.439 "params": { 00:06:34.439 "period_us": 100000, 00:06:34.439 "enable": false 00:06:34.439 } 00:06:34.439 }, 00:06:34.439 { 00:06:34.439 "method": "bdev_wait_for_examine" 00:06:34.439 } 00:06:34.439 ] 00:06:34.439 }, 00:06:34.439 { 00:06:34.439 "subsystem": "scsi", 00:06:34.439 "config": null 00:06:34.439 }, 00:06:34.439 { 00:06:34.439 "subsystem": "scheduler", 00:06:34.439 "config": [ 00:06:34.439 { 00:06:34.439 "method": "framework_set_scheduler", 00:06:34.439 "params": { 00:06:34.439 "name": "static" 00:06:34.439 } 00:06:34.439 } 00:06:34.439 ] 00:06:34.439 }, 00:06:34.439 { 00:06:34.439 "subsystem": "vhost_scsi", 00:06:34.439 "config": [] 00:06:34.439 }, 00:06:34.439 { 00:06:34.439 "subsystem": "vhost_blk", 00:06:34.439 "config": [] 00:06:34.439 }, 00:06:34.439 { 00:06:34.439 "subsystem": "ublk", 00:06:34.439 "config": [] 00:06:34.439 }, 00:06:34.439 { 00:06:34.439 "subsystem": "nbd", 00:06:34.439 "config": [] 00:06:34.439 }, 00:06:34.439 { 00:06:34.439 "subsystem": "nvmf", 00:06:34.439 "config": [ 00:06:34.439 { 00:06:34.439 "method": "nvmf_set_config", 00:06:34.439 "params": { 00:06:34.439 "discovery_filter": "match_any", 00:06:34.439 "admin_cmd_passthru": { 00:06:34.439 "identify_ctrlr": false 00:06:34.440 } 00:06:34.440 } 00:06:34.440 }, 00:06:34.440 { 00:06:34.440 "method": "nvmf_set_max_subsystems", 00:06:34.440 "params": { 00:06:34.440 "max_subsystems": 1024 00:06:34.440 } 00:06:34.440 }, 00:06:34.440 { 00:06:34.440 "method": "nvmf_set_crdt", 00:06:34.440 "params": { 00:06:34.440 "crdt1": 0, 00:06:34.440 "crdt2": 0, 00:06:34.440 "crdt3": 0 00:06:34.440 } 00:06:34.440 }, 00:06:34.440 { 00:06:34.440 "method": "nvmf_create_transport", 00:06:34.440 "params": { 00:06:34.440 "trtype": "TCP", 00:06:34.440 "max_queue_depth": 128, 00:06:34.440 "max_io_qpairs_per_ctrlr": 127, 00:06:34.440 "in_capsule_data_size": 4096, 00:06:34.440 "max_io_size": 131072, 00:06:34.440 "io_unit_size": 131072, 00:06:34.440 "max_aq_depth": 128, 00:06:34.440 "num_shared_buffers": 511, 00:06:34.440 "buf_cache_size": 4294967295, 00:06:34.440 "dif_insert_or_strip": false, 00:06:34.440 "zcopy": false, 00:06:34.440 "c2h_success": true, 00:06:34.440 "sock_priority": 0, 00:06:34.440 "abort_timeout_sec": 1, 00:06:34.440 "ack_timeout": 0, 00:06:34.440 "data_wr_pool_size": 0 00:06:34.440 } 00:06:34.440 } 00:06:34.440 ] 00:06:34.440 }, 00:06:34.440 { 00:06:34.440 "subsystem": "iscsi", 00:06:34.440 "config": [ 00:06:34.440 { 00:06:34.440 "method": "iscsi_set_options", 00:06:34.440 "params": { 00:06:34.440 "node_base": "iqn.2016-06.io.spdk", 00:06:34.440 "max_sessions": 128, 00:06:34.440 "max_connections_per_session": 2, 00:06:34.440 "max_queue_depth": 64, 00:06:34.440 "default_time2wait": 2, 00:06:34.440 "default_time2retain": 20, 00:06:34.440 "first_burst_length": 8192, 00:06:34.440 "immediate_data": true, 00:06:34.440 "allow_duplicated_isid": false, 00:06:34.440 "error_recovery_level": 0, 00:06:34.440 "nop_timeout": 60, 00:06:34.440 "nop_in_interval": 30, 00:06:34.440 "disable_chap": false, 00:06:34.440 "require_chap": false, 00:06:34.440 "mutual_chap": false, 
00:06:34.440 "chap_group": 0, 00:06:34.440 "max_large_datain_per_connection": 64, 00:06:34.440 "max_r2t_per_connection": 4, 00:06:34.440 "pdu_pool_size": 36864, 00:06:34.440 "immediate_data_pool_size": 16384, 00:06:34.440 "data_out_pool_size": 2048 00:06:34.440 } 00:06:34.440 } 00:06:34.440 ] 00:06:34.440 } 00:06:34.440 ] 00:06:34.440 } 00:06:34.440 09:12:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:34.440 09:12:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 62660 00:06:34.440 09:12:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 62660 ']' 00:06:34.440 09:12:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 62660 00:06:34.440 09:12:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:34.440 09:12:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:34.440 09:12:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62660 00:06:34.440 killing process with pid 62660 00:06:34.440 09:12:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:34.440 09:12:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:34.440 09:12:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62660' 00:06:34.440 09:12:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 62660 00:06:34.440 09:12:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 62660 00:06:36.967 09:12:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=62705 00:06:36.967 09:12:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:36.967 09:12:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:42.229 09:12:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 62705 00:06:42.229 09:12:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 62705 ']' 00:06:42.229 09:12:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 62705 00:06:42.229 09:12:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:42.229 09:12:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:42.229 09:12:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62705 00:06:42.229 killing process with pid 62705 00:06:42.229 09:12:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:42.229 09:12:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:42.229 09:12:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62705' 00:06:42.229 09:12:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 62705 00:06:42.229 09:12:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 62705 00:06:43.602 09:12:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:43.602 09:12:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 
00:06:43.602 ************************************ 00:06:43.602 END TEST skip_rpc_with_json 00:06:43.602 ************************************ 00:06:43.602 00:06:43.602 real 0m10.546s 00:06:43.602 user 0m10.197s 00:06:43.602 sys 0m0.693s 00:06:43.602 09:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.602 09:12:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:43.602 09:12:29 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:43.602 09:12:29 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:43.602 09:12:29 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.602 09:12:29 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.602 09:12:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.602 ************************************ 00:06:43.602 START TEST skip_rpc_with_delay 00:06:43.602 ************************************ 00:06:43.602 09:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:43.602 09:12:29 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:43.602 09:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:43.602 09:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:43.602 09:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:43.602 09:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.602 09:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:43.602 09:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.602 09:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:43.602 09:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.602 09:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:43.602 09:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:43.602 09:12:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:43.860 [2024-07-12 09:12:29.992814] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:43.860 [2024-07-12 09:12:29.993032] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:43.860 09:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:43.860 09:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:43.860 09:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:43.860 09:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:43.860 00:06:43.860 real 0m0.187s 00:06:43.860 user 0m0.114s 00:06:43.860 sys 0m0.072s 00:06:43.860 09:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.860 ************************************ 00:06:43.860 END TEST skip_rpc_with_delay 00:06:43.860 ************************************ 00:06:43.860 09:12:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:43.860 09:12:30 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:43.860 09:12:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:43.860 09:12:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:43.860 09:12:30 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:43.860 09:12:30 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.860 09:12:30 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.860 09:12:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.860 ************************************ 00:06:43.860 START TEST exit_on_failed_rpc_init 00:06:43.860 ************************************ 00:06:43.860 09:12:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:43.860 09:12:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=62835 00:06:43.860 09:12:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 62835 00:06:43.860 09:12:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:43.860 09:12:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 62835 ']' 00:06:43.860 09:12:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.860 09:12:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.860 09:12:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.860 09:12:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.860 09:12:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:44.118 [2024-07-12 09:12:30.217375] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:44.118 [2024-07-12 09:12:30.218131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62835 ] 00:06:44.118 [2024-07-12 09:12:30.382243] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.375 [2024-07-12 09:12:30.567681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.940 09:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.940 09:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:44.940 09:12:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:44.940 09:12:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:44.940 09:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:44.940 09:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:44.940 09:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:44.940 09:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.940 09:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:44.940 09:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.940 09:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:44.940 09:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.940 09:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:44.940 09:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:44.940 09:12:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:45.197 [2024-07-12 09:12:31.427928] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:45.197 [2024-07-12 09:12:31.428092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62853 ] 00:06:45.455 [2024-07-12 09:12:31.599291] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.455 [2024-07-12 09:12:31.787127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.455 [2024-07-12 09:12:31.787252] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:45.455 [2024-07-12 09:12:31.787278] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:45.455 [2024-07-12 09:12:31.787294] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.020 09:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:46.020 09:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:46.020 09:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:46.020 09:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:46.020 09:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:46.020 09:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:46.020 09:12:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:46.020 09:12:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 62835 00:06:46.020 09:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 62835 ']' 00:06:46.020 09:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 62835 00:06:46.020 09:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:46.020 09:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:46.020 09:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62835 00:06:46.020 killing process with pid 62835 00:06:46.020 09:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:46.020 09:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:46.020 09:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62835' 00:06:46.020 09:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 62835 00:06:46.020 09:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 62835 00:06:48.548 00:06:48.548 real 0m4.211s 00:06:48.548 user 0m4.953s 00:06:48.548 sys 0m0.503s 00:06:48.548 09:12:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.549 ************************************ 00:06:48.549 END TEST exit_on_failed_rpc_init 00:06:48.549 ************************************ 00:06:48.549 09:12:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:48.549 09:12:34 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:48.549 09:12:34 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:48.549 ************************************ 00:06:48.549 END TEST skip_rpc 00:06:48.549 ************************************ 00:06:48.549 00:06:48.549 real 0m22.351s 00:06:48.549 user 0m22.032s 00:06:48.549 sys 0m1.775s 00:06:48.549 09:12:34 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.549 09:12:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.549 09:12:34 -- common/autotest_common.sh@1142 -- # return 0 00:06:48.549 09:12:34 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:48.549 09:12:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.549 
09:12:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.549 09:12:34 -- common/autotest_common.sh@10 -- # set +x 00:06:48.549 ************************************ 00:06:48.549 START TEST rpc_client 00:06:48.549 ************************************ 00:06:48.549 09:12:34 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:48.549 * Looking for test storage... 00:06:48.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:48.549 09:12:34 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:48.549 OK 00:06:48.549 09:12:34 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:48.549 00:06:48.549 real 0m0.140s 00:06:48.549 user 0m0.068s 00:06:48.549 sys 0m0.077s 00:06:48.549 09:12:34 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.549 09:12:34 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:48.549 ************************************ 00:06:48.549 END TEST rpc_client 00:06:48.549 ************************************ 00:06:48.549 09:12:34 -- common/autotest_common.sh@1142 -- # return 0 00:06:48.549 09:12:34 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:48.549 09:12:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.549 09:12:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.549 09:12:34 -- common/autotest_common.sh@10 -- # set +x 00:06:48.549 ************************************ 00:06:48.549 START TEST json_config 00:06:48.549 ************************************ 00:06:48.549 09:12:34 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:48.549 09:12:34 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:48.549 09:12:34 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:48.549 09:12:34 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.549 09:12:34 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.549 09:12:34 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.549 09:12:34 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.549 09:12:34 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.549 09:12:34 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.549 09:12:34 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.549 09:12:34 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.549 09:12:34 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.549 09:12:34 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.549 09:12:34 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1a76d6b3-a7b4-4a82-9516-c6e99e966a66 00:06:48.549 09:12:34 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=1a76d6b3-a7b4-4a82-9516-c6e99e966a66 00:06:48.549 09:12:34 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.549 09:12:34 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.549 09:12:34 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:48.549 09:12:34 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.549 09:12:34 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:48.549 09:12:34 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.549 09:12:34 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.549 09:12:34 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.549 09:12:34 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.549 09:12:34 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.549 09:12:34 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.549 09:12:34 json_config -- paths/export.sh@5 -- # export PATH 00:06:48.549 09:12:34 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.549 09:12:34 json_config -- nvmf/common.sh@47 -- # : 0 00:06:48.549 09:12:34 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:48.549 09:12:34 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:48.549 09:12:34 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.549 09:12:34 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.549 09:12:34 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.549 09:12:34 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:48.549 09:12:34 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:48.549 09:12:34 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:48.549 09:12:34 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:48.549 09:12:34 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:48.549 09:12:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:48.549 09:12:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:48.549 09:12:34 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:48.549 09:12:34 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:48.549 WARNING: No tests are enabled so not running JSON configuration tests 00:06:48.549 09:12:34 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:48.549 00:06:48.549 real 0m0.074s 00:06:48.549 user 0m0.036s 00:06:48.549 sys 0m0.036s 00:06:48.549 09:12:34 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.549 09:12:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:48.549 ************************************ 00:06:48.549 END TEST json_config 00:06:48.549 ************************************ 00:06:48.549 09:12:34 -- common/autotest_common.sh@1142 -- # return 0 00:06:48.549 09:12:34 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:48.549 09:12:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.549 09:12:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.549 09:12:34 -- common/autotest_common.sh@10 -- # set +x 00:06:48.549 ************************************ 00:06:48.549 START TEST json_config_extra_key 00:06:48.549 ************************************ 00:06:48.549 09:12:34 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:48.549 09:12:34 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:48.549 09:12:34 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:48.549 09:12:34 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.549 09:12:34 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.549 09:12:34 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.549 09:12:34 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.549 09:12:34 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.549 09:12:34 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.549 09:12:34 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.549 09:12:34 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.549 09:12:34 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.549 09:12:34 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.549 09:12:34 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1a76d6b3-a7b4-4a82-9516-c6e99e966a66 00:06:48.549 09:12:34 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=1a76d6b3-a7b4-4a82-9516-c6e99e966a66 00:06:48.549 09:12:34 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.549 09:12:34 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.549 09:12:34 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:48.549 09:12:34 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.549 09:12:34 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:48.549 09:12:34 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:06:48.549 09:12:34 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.549 09:12:34 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.549 09:12:34 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.550 09:12:34 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.550 09:12:34 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.550 09:12:34 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:48.550 09:12:34 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.550 09:12:34 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:48.550 09:12:34 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:48.550 09:12:34 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:48.550 09:12:34 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.550 09:12:34 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.550 09:12:34 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.550 09:12:34 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:48.550 09:12:34 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:48.550 09:12:34 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:48.550 09:12:34 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:48.550 09:12:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:48.550 09:12:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:48.550 09:12:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 
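The json_config_extra_key harness traced above keeps its per-app state in small associative arrays keyed by app name: app_pid, app_socket, app_params and configs_path. A minimal bash sketch of that launch pattern, reusing the binary path, parameters and config file visible in this run; the helper name start_app_sketch is illustrative, the traced harness actually calls json_config_test_start_app from json_config/common.sh:

    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

    start_app_sketch() {
            local app=$1
            # launch spdk_tgt in the background with the per-app core mask, RPC socket and JSON config
            /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
                    -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
            app_pid[$app]=$!
    }

    start_app_sketch target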
00:06:48.550 INFO: launching applications... 00:06:48.550 09:12:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:48.550 09:12:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:48.550 09:12:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:48.550 09:12:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:48.550 09:12:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:48.550 09:12:34 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:48.550 09:12:34 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:48.550 09:12:34 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:48.550 09:12:34 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:48.550 09:12:34 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:48.550 09:12:34 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:48.550 09:12:34 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:48.550 09:12:34 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:48.550 09:12:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:48.550 09:12:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:48.550 09:12:34 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:48.550 09:12:34 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=63040 00:06:48.550 09:12:34 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:48.550 Waiting for target to run... 00:06:48.550 09:12:34 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 63040 /var/tmp/spdk_tgt.sock 00:06:48.550 09:12:34 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 63040 ']' 00:06:48.550 09:12:34 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:48.550 09:12:34 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.550 09:12:34 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:48.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:48.550 09:12:34 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.550 09:12:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:48.808 [2024-07-12 09:12:34.934523] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
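waitforlisten (autotest_common.sh) is what pauses the test at this point until the freshly started target answers on its RPC socket, retrying up to max_retries=100 times. A rough, hedged equivalent of that wait using rpc_get_methods as the probe; the probe and function name here are assumptions, not the exact body of the real helper:

    waitforlisten_sketch() {                     # illustrative stand-in for autotest_common.sh waitforlisten
            local pid=$1 rpc_sock=${2:-/var/tmp/spdk_tgt.sock} i
            for (( i = 100; i > 0; i-- )); do
                    # the RPC only succeeds once the app is up and listening on the socket
                    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null; then
                            return 0
                    fi
                    # stop early if the target already died instead of waiting out the full retry budget
                    kill -0 "$pid" 2>/dev/null || return 1
                    sleep 0.5
            done
            echo "timed out waiting on $rpc_sock" >&2
            return 1
    }

    waitforlisten_sketch 63040 /var/tmp/spdk_tgt.sock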
00:06:48.808 [2024-07-12 09:12:34.934965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63040 ] 00:06:49.066 [2024-07-12 09:12:35.272792] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.325 [2024-07-12 09:12:35.477498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.891 09:12:36 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.891 09:12:36 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:49.891 09:12:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:49.891 00:06:49.891 09:12:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:49.891 INFO: shutting down applications... 00:06:49.891 09:12:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:49.891 09:12:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:49.891 09:12:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:49.891 09:12:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 63040 ]] 00:06:49.891 09:12:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 63040 00:06:49.891 09:12:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:49.891 09:12:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:49.892 09:12:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63040 00:06:49.892 09:12:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:50.459 09:12:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:50.459 09:12:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:50.459 09:12:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63040 00:06:50.459 09:12:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:51.026 09:12:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:51.026 09:12:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:51.026 09:12:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63040 00:06:51.026 09:12:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:51.284 09:12:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:51.284 09:12:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:51.284 09:12:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63040 00:06:51.284 09:12:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:51.851 09:12:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:51.851 09:12:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:51.851 09:12:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63040 00:06:51.851 09:12:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:52.418 09:12:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:52.418 09:12:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:52.418 09:12:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63040 
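The kill -SIGINT / kill -0 / sleep 0.5 sequence traced just above is json_config_test_shutdown_app polling for the target to exit, giving it up to 30 half-second intervals. Reconstructed as a standalone sketch; pid 63040 comes from this run, and the kill -9 fallback on timeout is an assumption rather than something shown in this part of the trace:

    pid=63040
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
            kill -0 "$pid" 2>/dev/null || break     # target has exited cleanly
            sleep 0.5
    done
    if kill -0 "$pid" 2>/dev/null; then
            kill -9 "$pid"                          # assumed fallback if SIGINT was ignored
    else
            echo 'SPDK target shutdown done'
    fi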
00:06:52.418 SPDK target shutdown done 00:06:52.418 Success 00:06:52.418 09:12:38 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:52.418 09:12:38 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:52.418 09:12:38 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:52.418 09:12:38 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:52.418 09:12:38 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:52.418 00:06:52.418 real 0m3.896s 00:06:52.418 user 0m3.773s 00:06:52.418 sys 0m0.469s 00:06:52.418 ************************************ 00:06:52.418 END TEST json_config_extra_key 00:06:52.418 ************************************ 00:06:52.418 09:12:38 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.418 09:12:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:52.418 09:12:38 -- common/autotest_common.sh@1142 -- # return 0 00:06:52.418 09:12:38 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:52.418 09:12:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.418 09:12:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.418 09:12:38 -- common/autotest_common.sh@10 -- # set +x 00:06:52.418 ************************************ 00:06:52.418 START TEST alias_rpc 00:06:52.418 ************************************ 00:06:52.418 09:12:38 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:52.418 * Looking for test storage... 00:06:52.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:52.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.418 09:12:38 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:52.418 09:12:38 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=63136 00:06:52.418 09:12:38 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 63136 00:06:52.418 09:12:38 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:52.418 09:12:38 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 63136 ']' 00:06:52.418 09:12:38 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.418 09:12:38 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.418 09:12:38 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.418 09:12:38 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.418 09:12:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.678 [2024-07-12 09:12:38.858595] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
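Both the skip_rpc run earlier and the alias_rpc/spdkcli_tcp runs below finish with killprocess, which checks that the pid still belongs to an SPDK reactor before signalling it and then waits on it so the exit status is collected. A condensed sketch of that flow as it appears in the autotest_common.sh trace; the function name killprocess_sketch is illustrative:

    killprocess_sketch() {
            local pid=$1
            [ -n "$pid" ] || return 1
            kill -0 "$pid" 2>/dev/null || return 1        # nothing to do if it already exited
            local process_name=reactor_0
            if [ "$(uname)" = Linux ]; then
                    process_name=$(ps --no-headers -o comm= "$pid")
            fi
            # the real helper special-cases a sudo-wrapped process here; this run hit the plain reactor_0 path
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"
    }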
00:06:52.678 [2024-07-12 09:12:38.858779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63136 ] 00:06:52.936 [2024-07-12 09:12:39.030729] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.936 [2024-07-12 09:12:39.211960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.871 09:12:39 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.871 09:12:39 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:53.871 09:12:39 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:53.871 09:12:40 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 63136 00:06:53.871 09:12:40 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 63136 ']' 00:06:53.871 09:12:40 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 63136 00:06:53.871 09:12:40 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:54.129 09:12:40 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:54.129 09:12:40 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63136 00:06:54.129 09:12:40 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:54.129 killing process with pid 63136 00:06:54.129 09:12:40 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:54.129 09:12:40 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63136' 00:06:54.129 09:12:40 alias_rpc -- common/autotest_common.sh@967 -- # kill 63136 00:06:54.129 09:12:40 alias_rpc -- common/autotest_common.sh@972 -- # wait 63136 00:06:56.029 00:06:56.029 real 0m3.688s 00:06:56.029 user 0m3.931s 00:06:56.029 sys 0m0.444s 00:06:56.029 ************************************ 00:06:56.029 END TEST alias_rpc 00:06:56.029 ************************************ 00:06:56.029 09:12:42 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.029 09:12:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.286 09:12:42 -- common/autotest_common.sh@1142 -- # return 0 00:06:56.286 09:12:42 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:56.286 09:12:42 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:56.286 09:12:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.286 09:12:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.286 09:12:42 -- common/autotest_common.sh@10 -- # set +x 00:06:56.286 ************************************ 00:06:56.286 START TEST spdkcli_tcp 00:06:56.286 ************************************ 00:06:56.286 09:12:42 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:56.286 * Looking for test storage... 
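The alias_rpc test traced above only has to prove that deprecated RPC method aliases still load, so it feeds a JSON configuration into scripts/rpc.py load_config -i against the freshly started target. A hedged example of that invocation; the config path /tmp/alias_test_config.json is hypothetical and not the file the test actually ships:

    # the config arrives on stdin; -i / --include-aliases lets deprecated method names resolve too
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i < /tmp/alias_test_config.json

    # /tmp/alias_test_config.json (hypothetical) would hold the usual SPDK JSON config shape:
    #   { "subsystems": [ { "subsystem": "...", "config": [ { "method": "...", "params": { ... } } ] } ] }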
00:06:56.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:56.286 09:12:42 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:56.286 09:12:42 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:56.286 09:12:42 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:56.286 09:12:42 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:56.286 09:12:42 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:56.286 09:12:42 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:56.286 09:12:42 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:56.286 09:12:42 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:56.286 09:12:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:56.286 09:12:42 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=63225 00:06:56.286 09:12:42 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:56.286 09:12:42 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 63225 00:06:56.286 09:12:42 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 63225 ']' 00:06:56.286 09:12:42 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.286 09:12:42 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.286 09:12:42 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.286 09:12:42 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.286 09:12:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:56.286 [2024-07-12 09:12:42.603074] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
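spdkcli_tcp exercises the same RPC surface over TCP: a socat process bridges 127.0.0.1:9998 to the target's UNIX socket and rpc.py is pointed at the TCP side, which is what produces the long rpc_get_methods listing below. A condensed sketch of that bridge, using the address, port and flags taken from this log:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # same call as over the UNIX socket, but via TCP, with 100 connection retries (-r) and a 2s timeout (-t)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"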
00:06:56.286 [2024-07-12 09:12:42.603264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63225 ] 00:06:56.544 [2024-07-12 09:12:42.773631] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:56.800 [2024-07-12 09:12:42.962596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.800 [2024-07-12 09:12:42.962604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.363 09:12:43 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.363 09:12:43 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:57.363 09:12:43 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=63248 00:06:57.363 09:12:43 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:57.363 09:12:43 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:57.621 [ 00:06:57.621 "bdev_malloc_delete", 00:06:57.621 "bdev_malloc_create", 00:06:57.621 "bdev_null_resize", 00:06:57.621 "bdev_null_delete", 00:06:57.621 "bdev_null_create", 00:06:57.621 "bdev_nvme_cuse_unregister", 00:06:57.621 "bdev_nvme_cuse_register", 00:06:57.621 "bdev_opal_new_user", 00:06:57.621 "bdev_opal_set_lock_state", 00:06:57.621 "bdev_opal_delete", 00:06:57.621 "bdev_opal_get_info", 00:06:57.621 "bdev_opal_create", 00:06:57.621 "bdev_nvme_opal_revert", 00:06:57.621 "bdev_nvme_opal_init", 00:06:57.621 "bdev_nvme_send_cmd", 00:06:57.621 "bdev_nvme_get_path_iostat", 00:06:57.621 "bdev_nvme_get_mdns_discovery_info", 00:06:57.621 "bdev_nvme_stop_mdns_discovery", 00:06:57.621 "bdev_nvme_start_mdns_discovery", 00:06:57.621 "bdev_nvme_set_multipath_policy", 00:06:57.621 "bdev_nvme_set_preferred_path", 00:06:57.621 "bdev_nvme_get_io_paths", 00:06:57.621 "bdev_nvme_remove_error_injection", 00:06:57.621 "bdev_nvme_add_error_injection", 00:06:57.621 "bdev_nvme_get_discovery_info", 00:06:57.621 "bdev_nvme_stop_discovery", 00:06:57.621 "bdev_nvme_start_discovery", 00:06:57.621 "bdev_nvme_get_controller_health_info", 00:06:57.621 "bdev_nvme_disable_controller", 00:06:57.621 "bdev_nvme_enable_controller", 00:06:57.621 "bdev_nvme_reset_controller", 00:06:57.621 "bdev_nvme_get_transport_statistics", 00:06:57.621 "bdev_nvme_apply_firmware", 00:06:57.621 "bdev_nvme_detach_controller", 00:06:57.621 "bdev_nvme_get_controllers", 00:06:57.621 "bdev_nvme_attach_controller", 00:06:57.622 "bdev_nvme_set_hotplug", 00:06:57.622 "bdev_nvme_set_options", 00:06:57.622 "bdev_passthru_delete", 00:06:57.622 "bdev_passthru_create", 00:06:57.622 "bdev_lvol_set_parent_bdev", 00:06:57.622 "bdev_lvol_set_parent", 00:06:57.622 "bdev_lvol_check_shallow_copy", 00:06:57.622 "bdev_lvol_start_shallow_copy", 00:06:57.622 "bdev_lvol_grow_lvstore", 00:06:57.622 "bdev_lvol_get_lvols", 00:06:57.622 "bdev_lvol_get_lvstores", 00:06:57.622 "bdev_lvol_delete", 00:06:57.622 "bdev_lvol_set_read_only", 00:06:57.622 "bdev_lvol_resize", 00:06:57.622 "bdev_lvol_decouple_parent", 00:06:57.622 "bdev_lvol_inflate", 00:06:57.622 "bdev_lvol_rename", 00:06:57.622 "bdev_lvol_clone_bdev", 00:06:57.622 "bdev_lvol_clone", 00:06:57.622 "bdev_lvol_snapshot", 00:06:57.622 "bdev_lvol_create", 00:06:57.622 "bdev_lvol_delete_lvstore", 00:06:57.622 "bdev_lvol_rename_lvstore", 00:06:57.622 "bdev_lvol_create_lvstore", 
00:06:57.622 "bdev_raid_set_options", 00:06:57.622 "bdev_raid_remove_base_bdev", 00:06:57.622 "bdev_raid_add_base_bdev", 00:06:57.622 "bdev_raid_delete", 00:06:57.622 "bdev_raid_create", 00:06:57.622 "bdev_raid_get_bdevs", 00:06:57.622 "bdev_error_inject_error", 00:06:57.622 "bdev_error_delete", 00:06:57.622 "bdev_error_create", 00:06:57.622 "bdev_split_delete", 00:06:57.622 "bdev_split_create", 00:06:57.622 "bdev_delay_delete", 00:06:57.622 "bdev_delay_create", 00:06:57.622 "bdev_delay_update_latency", 00:06:57.622 "bdev_zone_block_delete", 00:06:57.622 "bdev_zone_block_create", 00:06:57.622 "blobfs_create", 00:06:57.622 "blobfs_detect", 00:06:57.622 "blobfs_set_cache_size", 00:06:57.622 "bdev_xnvme_delete", 00:06:57.622 "bdev_xnvme_create", 00:06:57.622 "bdev_aio_delete", 00:06:57.622 "bdev_aio_rescan", 00:06:57.622 "bdev_aio_create", 00:06:57.622 "bdev_ftl_set_property", 00:06:57.622 "bdev_ftl_get_properties", 00:06:57.622 "bdev_ftl_get_stats", 00:06:57.622 "bdev_ftl_unmap", 00:06:57.622 "bdev_ftl_unload", 00:06:57.622 "bdev_ftl_delete", 00:06:57.622 "bdev_ftl_load", 00:06:57.622 "bdev_ftl_create", 00:06:57.622 "bdev_virtio_attach_controller", 00:06:57.622 "bdev_virtio_scsi_get_devices", 00:06:57.622 "bdev_virtio_detach_controller", 00:06:57.622 "bdev_virtio_blk_set_hotplug", 00:06:57.622 "bdev_iscsi_delete", 00:06:57.622 "bdev_iscsi_create", 00:06:57.622 "bdev_iscsi_set_options", 00:06:57.622 "accel_error_inject_error", 00:06:57.622 "ioat_scan_accel_module", 00:06:57.622 "dsa_scan_accel_module", 00:06:57.622 "iaa_scan_accel_module", 00:06:57.622 "keyring_file_remove_key", 00:06:57.622 "keyring_file_add_key", 00:06:57.622 "keyring_linux_set_options", 00:06:57.622 "iscsi_get_histogram", 00:06:57.622 "iscsi_enable_histogram", 00:06:57.622 "iscsi_set_options", 00:06:57.622 "iscsi_get_auth_groups", 00:06:57.622 "iscsi_auth_group_remove_secret", 00:06:57.622 "iscsi_auth_group_add_secret", 00:06:57.622 "iscsi_delete_auth_group", 00:06:57.622 "iscsi_create_auth_group", 00:06:57.622 "iscsi_set_discovery_auth", 00:06:57.622 "iscsi_get_options", 00:06:57.622 "iscsi_target_node_request_logout", 00:06:57.622 "iscsi_target_node_set_redirect", 00:06:57.622 "iscsi_target_node_set_auth", 00:06:57.622 "iscsi_target_node_add_lun", 00:06:57.622 "iscsi_get_stats", 00:06:57.622 "iscsi_get_connections", 00:06:57.622 "iscsi_portal_group_set_auth", 00:06:57.622 "iscsi_start_portal_group", 00:06:57.622 "iscsi_delete_portal_group", 00:06:57.622 "iscsi_create_portal_group", 00:06:57.622 "iscsi_get_portal_groups", 00:06:57.622 "iscsi_delete_target_node", 00:06:57.622 "iscsi_target_node_remove_pg_ig_maps", 00:06:57.622 "iscsi_target_node_add_pg_ig_maps", 00:06:57.622 "iscsi_create_target_node", 00:06:57.622 "iscsi_get_target_nodes", 00:06:57.622 "iscsi_delete_initiator_group", 00:06:57.622 "iscsi_initiator_group_remove_initiators", 00:06:57.622 "iscsi_initiator_group_add_initiators", 00:06:57.622 "iscsi_create_initiator_group", 00:06:57.622 "iscsi_get_initiator_groups", 00:06:57.622 "nvmf_set_crdt", 00:06:57.622 "nvmf_set_config", 00:06:57.622 "nvmf_set_max_subsystems", 00:06:57.622 "nvmf_stop_mdns_prr", 00:06:57.622 "nvmf_publish_mdns_prr", 00:06:57.622 "nvmf_subsystem_get_listeners", 00:06:57.622 "nvmf_subsystem_get_qpairs", 00:06:57.622 "nvmf_subsystem_get_controllers", 00:06:57.622 "nvmf_get_stats", 00:06:57.622 "nvmf_get_transports", 00:06:57.622 "nvmf_create_transport", 00:06:57.622 "nvmf_get_targets", 00:06:57.622 "nvmf_delete_target", 00:06:57.622 "nvmf_create_target", 00:06:57.622 
"nvmf_subsystem_allow_any_host", 00:06:57.622 "nvmf_subsystem_remove_host", 00:06:57.622 "nvmf_subsystem_add_host", 00:06:57.622 "nvmf_ns_remove_host", 00:06:57.622 "nvmf_ns_add_host", 00:06:57.622 "nvmf_subsystem_remove_ns", 00:06:57.622 "nvmf_subsystem_add_ns", 00:06:57.622 "nvmf_subsystem_listener_set_ana_state", 00:06:57.622 "nvmf_discovery_get_referrals", 00:06:57.622 "nvmf_discovery_remove_referral", 00:06:57.622 "nvmf_discovery_add_referral", 00:06:57.622 "nvmf_subsystem_remove_listener", 00:06:57.622 "nvmf_subsystem_add_listener", 00:06:57.622 "nvmf_delete_subsystem", 00:06:57.622 "nvmf_create_subsystem", 00:06:57.622 "nvmf_get_subsystems", 00:06:57.622 "env_dpdk_get_mem_stats", 00:06:57.622 "nbd_get_disks", 00:06:57.622 "nbd_stop_disk", 00:06:57.622 "nbd_start_disk", 00:06:57.622 "ublk_recover_disk", 00:06:57.622 "ublk_get_disks", 00:06:57.622 "ublk_stop_disk", 00:06:57.622 "ublk_start_disk", 00:06:57.622 "ublk_destroy_target", 00:06:57.622 "ublk_create_target", 00:06:57.622 "virtio_blk_create_transport", 00:06:57.622 "virtio_blk_get_transports", 00:06:57.622 "vhost_controller_set_coalescing", 00:06:57.622 "vhost_get_controllers", 00:06:57.622 "vhost_delete_controller", 00:06:57.622 "vhost_create_blk_controller", 00:06:57.622 "vhost_scsi_controller_remove_target", 00:06:57.622 "vhost_scsi_controller_add_target", 00:06:57.622 "vhost_start_scsi_controller", 00:06:57.622 "vhost_create_scsi_controller", 00:06:57.622 "thread_set_cpumask", 00:06:57.622 "framework_get_governor", 00:06:57.622 "framework_get_scheduler", 00:06:57.622 "framework_set_scheduler", 00:06:57.622 "framework_get_reactors", 00:06:57.622 "thread_get_io_channels", 00:06:57.622 "thread_get_pollers", 00:06:57.622 "thread_get_stats", 00:06:57.622 "framework_monitor_context_switch", 00:06:57.622 "spdk_kill_instance", 00:06:57.622 "log_enable_timestamps", 00:06:57.622 "log_get_flags", 00:06:57.622 "log_clear_flag", 00:06:57.622 "log_set_flag", 00:06:57.622 "log_get_level", 00:06:57.622 "log_set_level", 00:06:57.622 "log_get_print_level", 00:06:57.622 "log_set_print_level", 00:06:57.622 "framework_enable_cpumask_locks", 00:06:57.622 "framework_disable_cpumask_locks", 00:06:57.622 "framework_wait_init", 00:06:57.622 "framework_start_init", 00:06:57.622 "scsi_get_devices", 00:06:57.622 "bdev_get_histogram", 00:06:57.622 "bdev_enable_histogram", 00:06:57.622 "bdev_set_qos_limit", 00:06:57.622 "bdev_set_qd_sampling_period", 00:06:57.622 "bdev_get_bdevs", 00:06:57.622 "bdev_reset_iostat", 00:06:57.622 "bdev_get_iostat", 00:06:57.622 "bdev_examine", 00:06:57.622 "bdev_wait_for_examine", 00:06:57.622 "bdev_set_options", 00:06:57.622 "notify_get_notifications", 00:06:57.622 "notify_get_types", 00:06:57.622 "accel_get_stats", 00:06:57.622 "accel_set_options", 00:06:57.622 "accel_set_driver", 00:06:57.622 "accel_crypto_key_destroy", 00:06:57.622 "accel_crypto_keys_get", 00:06:57.622 "accel_crypto_key_create", 00:06:57.622 "accel_assign_opc", 00:06:57.622 "accel_get_module_info", 00:06:57.622 "accel_get_opc_assignments", 00:06:57.622 "vmd_rescan", 00:06:57.622 "vmd_remove_device", 00:06:57.622 "vmd_enable", 00:06:57.622 "sock_get_default_impl", 00:06:57.622 "sock_set_default_impl", 00:06:57.622 "sock_impl_set_options", 00:06:57.622 "sock_impl_get_options", 00:06:57.622 "iobuf_get_stats", 00:06:57.622 "iobuf_set_options", 00:06:57.622 "framework_get_pci_devices", 00:06:57.622 "framework_get_config", 00:06:57.622 "framework_get_subsystems", 00:06:57.622 "trace_get_info", 00:06:57.622 "trace_get_tpoint_group_mask", 00:06:57.622 
"trace_disable_tpoint_group", 00:06:57.622 "trace_enable_tpoint_group", 00:06:57.622 "trace_clear_tpoint_mask", 00:06:57.622 "trace_set_tpoint_mask", 00:06:57.622 "keyring_get_keys", 00:06:57.622 "spdk_get_version", 00:06:57.622 "rpc_get_methods" 00:06:57.622 ] 00:06:57.622 09:12:43 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:57.622 09:12:43 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:57.622 09:12:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.880 09:12:43 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:57.880 09:12:43 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 63225 00:06:57.880 09:12:43 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 63225 ']' 00:06:57.880 09:12:43 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 63225 00:06:57.880 09:12:43 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:57.880 09:12:43 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.880 09:12:43 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63225 00:06:57.880 09:12:43 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:57.880 09:12:43 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:57.880 killing process with pid 63225 00:06:57.880 09:12:43 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63225' 00:06:57.880 09:12:43 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 63225 00:06:57.880 09:12:43 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 63225 00:06:59.780 00:06:59.780 real 0m3.695s 00:06:59.780 user 0m6.627s 00:06:59.780 sys 0m0.492s 00:06:59.780 09:12:46 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.780 ************************************ 00:06:59.780 END TEST spdkcli_tcp 00:06:59.780 ************************************ 00:06:59.780 09:12:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:00.038 09:12:46 -- common/autotest_common.sh@1142 -- # return 0 00:07:00.038 09:12:46 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:00.038 09:12:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:00.038 09:12:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.038 09:12:46 -- common/autotest_common.sh@10 -- # set +x 00:07:00.038 ************************************ 00:07:00.038 START TEST dpdk_mem_utility 00:07:00.038 ************************************ 00:07:00.038 09:12:46 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:00.038 * Looking for test storage... 
00:07:00.038 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:00.038 09:12:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:00.038 09:12:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=63342 00:07:00.038 09:12:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 63342 00:07:00.038 09:12:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:00.038 09:12:46 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 63342 ']' 00:07:00.038 09:12:46 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.038 09:12:46 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.038 09:12:46 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.039 09:12:46 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.039 09:12:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:00.039 [2024-07-12 09:12:46.383938] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:00.039 [2024-07-12 09:12:46.384116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63342 ] 00:07:00.296 [2024-07-12 09:12:46.557752] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.555 [2024-07-12 09:12:46.759264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.122 09:12:47 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.123 09:12:47 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:07:01.123 09:12:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:01.123 09:12:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:01.123 09:12:47 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.123 09:12:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:01.123 { 00:07:01.123 "filename": "/tmp/spdk_mem_dump.txt" 00:07:01.123 } 00:07:01.123 09:12:47 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.123 09:12:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:01.382 DPDK memory size 820.000000 MiB in 1 heap(s) 00:07:01.382 1 heaps totaling size 820.000000 MiB 00:07:01.382 size: 820.000000 MiB heap id: 0 00:07:01.382 end heaps---------- 00:07:01.382 8 mempools totaling size 598.116089 MiB 00:07:01.382 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:01.382 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:01.382 size: 84.521057 MiB name: bdev_io_63342 00:07:01.382 size: 51.011292 MiB name: evtpool_63342 00:07:01.382 size: 50.003479 MiB name: msgpool_63342 00:07:01.382 size: 21.763794 MiB name: PDU_Pool 00:07:01.382 size: 19.513306 MiB name: SCSI_TASK_Pool 
00:07:01.382 size: 0.026123 MiB name: Session_Pool 00:07:01.382 end mempools------- 00:07:01.382 6 memzones totaling size 4.142822 MiB 00:07:01.382 size: 1.000366 MiB name: RG_ring_0_63342 00:07:01.382 size: 1.000366 MiB name: RG_ring_1_63342 00:07:01.382 size: 1.000366 MiB name: RG_ring_4_63342 00:07:01.382 size: 1.000366 MiB name: RG_ring_5_63342 00:07:01.382 size: 0.125366 MiB name: RG_ring_2_63342 00:07:01.382 size: 0.015991 MiB name: RG_ring_3_63342 00:07:01.382 end memzones------- 00:07:01.382 09:12:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:01.382 heap id: 0 total size: 820.000000 MiB number of busy elements: 297 number of free elements: 18 00:07:01.382 list of free elements. size: 18.452271 MiB 00:07:01.382 element at address: 0x200000400000 with size: 1.999451 MiB 00:07:01.382 element at address: 0x200000800000 with size: 1.996887 MiB 00:07:01.382 element at address: 0x200007000000 with size: 1.995972 MiB 00:07:01.382 element at address: 0x20000b200000 with size: 1.995972 MiB 00:07:01.382 element at address: 0x200019100040 with size: 0.999939 MiB 00:07:01.382 element at address: 0x200019500040 with size: 0.999939 MiB 00:07:01.382 element at address: 0x200019600000 with size: 0.999084 MiB 00:07:01.382 element at address: 0x200003e00000 with size: 0.996094 MiB 00:07:01.382 element at address: 0x200032200000 with size: 0.994324 MiB 00:07:01.382 element at address: 0x200018e00000 with size: 0.959656 MiB 00:07:01.382 element at address: 0x200019900040 with size: 0.936401 MiB 00:07:01.382 element at address: 0x200000200000 with size: 0.830200 MiB 00:07:01.382 element at address: 0x20001b000000 with size: 0.564880 MiB 00:07:01.382 element at address: 0x200019200000 with size: 0.487976 MiB 00:07:01.382 element at address: 0x200019a00000 with size: 0.485413 MiB 00:07:01.382 element at address: 0x200013800000 with size: 0.467651 MiB 00:07:01.382 element at address: 0x200028400000 with size: 0.390442 MiB 00:07:01.382 element at address: 0x200003a00000 with size: 0.351990 MiB 00:07:01.382 list of standard malloc elements. 
size: 199.283325 MiB 00:07:01.382 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:07:01.382 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:07:01.382 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:07:01.382 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:07:01.382 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:07:01.382 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:07:01.382 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:07:01.382 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:07:01.382 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:07:01.382 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:07:01.382 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:07:01.382 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:07:01.382 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000002d6f00 with size: 0.000244 MiB 
00:07:01.383 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200003aff980 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200003affa80 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200003eff000 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:07:01.383 element at 
address: 0x2000137ff380 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200013877b80 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200013877c80 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200013877d80 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200013877e80 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200013877f80 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200013878080 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200013878180 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200013878280 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200013878380 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200013878480 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200013878580 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x200019abc680 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0911c0 
with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0942c0 with size: 0.000244 MiB 
00:07:01.383 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:07:01.383 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:07:01.384 element at address: 0x200028463f40 with size: 0.000244 MiB 00:07:01.384 element at address: 0x200028464040 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846af80 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846b080 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846b180 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846b280 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846b380 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846b480 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846b580 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846b680 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846b780 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846b880 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846b980 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846be80 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846c080 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846c180 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846c280 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846c380 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846c480 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846c580 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846c680 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846c780 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846c880 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846c980 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:07:01.384 element at 
address: 0x20002846cc80 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846d080 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846d180 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846d280 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846d380 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846d480 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846d580 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846d680 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846d780 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846d880 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846d980 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846da80 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846db80 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846de80 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846df80 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846e080 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846e180 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846e280 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846e380 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846e480 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846e580 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846e680 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846e780 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846e880 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846e980 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846f080 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846f180 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846f280 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846f380 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846f480 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846f580 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846f680 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846f780 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846f880 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846f980 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846fd80 
with size: 0.000244 MiB 00:07:01.384 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:07:01.384 list of memzone associated elements. size: 602.264404 MiB 00:07:01.384 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:07:01.384 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:01.384 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:07:01.384 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:01.384 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:07:01.384 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_63342_0 00:07:01.384 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:07:01.384 associated memzone info: size: 48.002930 MiB name: MP_evtpool_63342_0 00:07:01.384 element at address: 0x200003fff340 with size: 48.003113 MiB 00:07:01.384 associated memzone info: size: 48.002930 MiB name: MP_msgpool_63342_0 00:07:01.384 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:07:01.384 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:01.384 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:07:01.384 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:01.384 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:07:01.384 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_63342 00:07:01.384 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:07:01.384 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_63342 00:07:01.384 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:07:01.384 associated memzone info: size: 1.007996 MiB name: MP_evtpool_63342 00:07:01.384 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:07:01.384 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:01.384 element at address: 0x200019abc780 with size: 1.008179 MiB 00:07:01.384 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:01.384 element at address: 0x200018efde00 with size: 1.008179 MiB 00:07:01.384 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:01.384 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:07:01.384 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:01.384 element at address: 0x200003eff100 with size: 1.000549 MiB 00:07:01.384 associated memzone info: size: 1.000366 MiB name: RG_ring_0_63342 00:07:01.384 element at address: 0x200003affb80 with size: 1.000549 MiB 00:07:01.384 associated memzone info: size: 1.000366 MiB name: RG_ring_1_63342 00:07:01.384 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:07:01.384 associated memzone info: size: 1.000366 MiB name: RG_ring_4_63342 00:07:01.384 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:07:01.384 associated memzone info: size: 1.000366 MiB name: RG_ring_5_63342 00:07:01.384 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:07:01.384 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_63342 00:07:01.384 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:07:01.384 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:01.384 element at address: 0x200013878680 with size: 0.500549 MiB 00:07:01.384 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:01.384 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:07:01.384 associated memzone info: size: 0.250366 MiB name: 
RG_MP_PDU_immediate_data_Pool 00:07:01.384 element at address: 0x200003adf740 with size: 0.125549 MiB 00:07:01.384 associated memzone info: size: 0.125366 MiB name: RG_ring_2_63342 00:07:01.384 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:07:01.384 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:01.384 element at address: 0x200028464140 with size: 0.023804 MiB 00:07:01.384 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:01.384 element at address: 0x200003adb500 with size: 0.016174 MiB 00:07:01.384 associated memzone info: size: 0.015991 MiB name: RG_ring_3_63342 00:07:01.384 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:07:01.384 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:01.384 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:07:01.384 associated memzone info: size: 0.000183 MiB name: MP_msgpool_63342 00:07:01.384 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:07:01.384 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_63342 00:07:01.384 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:07:01.384 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:01.384 09:12:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:01.384 09:12:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 63342 00:07:01.385 09:12:47 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 63342 ']' 00:07:01.385 09:12:47 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 63342 00:07:01.385 09:12:47 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:07:01.385 09:12:47 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:01.385 09:12:47 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63342 00:07:01.385 09:12:47 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:01.385 09:12:47 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:01.385 killing process with pid 63342 00:07:01.385 09:12:47 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63342' 00:07:01.385 09:12:47 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 63342 00:07:01.385 09:12:47 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 63342 00:07:03.915 00:07:03.915 real 0m3.602s 00:07:03.915 user 0m3.774s 00:07:03.915 sys 0m0.472s 00:07:03.915 09:12:49 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.915 09:12:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:03.915 ************************************ 00:07:03.915 END TEST dpdk_mem_utility 00:07:03.915 ************************************ 00:07:03.915 09:12:49 -- common/autotest_common.sh@1142 -- # return 0 00:07:03.915 09:12:49 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:03.915 09:12:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:03.915 09:12:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.915 09:12:49 -- common/autotest_common.sh@10 -- # set +x 00:07:03.915 ************************************ 00:07:03.915 START TEST event 00:07:03.915 ************************************ 00:07:03.915 09:12:49 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 
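The dpdk_mem_utility teardown traced a few lines above (the `kill -0` / `uname` / `ps --no-headers` / `kill` / `wait` sequence on pid 63342) is the killprocess pattern used by every test in this run. A simplified, illustrative reconstruction of that pattern is below; the individual commands and the "killing process with pid" message mirror the trace, while the surrounding function is a sketch, not the exact helper from autotest_common.sh.

```bash
#!/usr/bin/env bash
# Simplified sketch of the killprocess teardown seen in the trace above.
# Assumes the PID belongs to a process started by this same shell.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                 # no PID recorded, nothing to do
    kill -0 "$pid" 2>/dev/null || return 0    # process already exited
    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid ($name)"
    fi
    kill "$pid"            # SIGTERM, as in the trace
    wait "$pid" || true    # reap the child so its exit status is collected
}
```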
00:07:03.915 * Looking for test storage... 00:07:03.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:03.915 09:12:49 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:03.915 09:12:49 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:03.915 09:12:49 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:03.915 09:12:49 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:03.915 09:12:49 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.915 09:12:49 event -- common/autotest_common.sh@10 -- # set +x 00:07:03.915 ************************************ 00:07:03.915 START TEST event_perf 00:07:03.915 ************************************ 00:07:03.915 09:12:49 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:03.915 Running I/O for 1 seconds...[2024-07-12 09:12:49.929208] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:03.915 [2024-07-12 09:12:49.929386] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63437 ] 00:07:03.915 [2024-07-12 09:12:50.103173] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:04.173 [2024-07-12 09:12:50.354443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.173 [2024-07-12 09:12:50.354547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.173 [2024-07-12 09:12:50.354680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.173 [2024-07-12 09:12:50.354700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.551 Running I/O for 1 seconds... 00:07:05.551 lcore 0: 191847 00:07:05.551 lcore 1: 191843 00:07:05.551 lcore 2: 191844 00:07:05.551 lcore 3: 191846 00:07:05.551 done. 00:07:05.551 00:07:05.551 real 0m1.880s 00:07:05.551 user 0m4.634s 00:07:05.551 sys 0m0.121s 00:07:05.551 09:12:51 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.551 09:12:51 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:05.551 ************************************ 00:07:05.551 END TEST event_perf 00:07:05.551 ************************************ 00:07:05.551 09:12:51 event -- common/autotest_common.sh@1142 -- # return 0 00:07:05.551 09:12:51 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:05.551 09:12:51 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:05.551 09:12:51 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.551 09:12:51 event -- common/autotest_common.sh@10 -- # set +x 00:07:05.551 ************************************ 00:07:05.551 START TEST event_reactor 00:07:05.551 ************************************ 00:07:05.551 09:12:51 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:05.551 [2024-07-12 09:12:51.853976] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
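The event_perf run above can be reproduced by hand. A minimal sketch is below, assuming a built SPDK tree at the path shown in the trace; judging from the log, -m is the reactor core mask (0xF yields the four lcores reported) and -t the run time in seconds (the 1-second run above).

```bash
#!/usr/bin/env bash
# Illustrative manual run of the event_perf micro-benchmark traced above.
SPDK_DIR=/home/vagrant/spdk_repo/spdk    # path as used in the trace

# 0xF = cores 0-3, 1 second; prints one "lcore N: <event count>" line per core.
"$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1
```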
00:07:05.551 [2024-07-12 09:12:51.854150] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63482 ] 00:07:05.810 [2024-07-12 09:12:52.013108] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.068 [2024-07-12 09:12:52.199838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.442 test_start 00:07:07.442 oneshot 00:07:07.442 tick 100 00:07:07.442 tick 100 00:07:07.442 tick 250 00:07:07.442 tick 100 00:07:07.442 tick 100 00:07:07.442 tick 100 00:07:07.442 tick 250 00:07:07.442 tick 500 00:07:07.442 tick 100 00:07:07.442 tick 100 00:07:07.442 tick 250 00:07:07.442 tick 100 00:07:07.442 tick 100 00:07:07.442 test_end 00:07:07.442 00:07:07.442 real 0m1.773s 00:07:07.442 user 0m1.573s 00:07:07.442 sys 0m0.090s 00:07:07.442 09:12:53 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.442 09:12:53 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:07.442 ************************************ 00:07:07.442 END TEST event_reactor 00:07:07.442 ************************************ 00:07:07.442 09:12:53 event -- common/autotest_common.sh@1142 -- # return 0 00:07:07.442 09:12:53 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:07.442 09:12:53 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:07.442 09:12:53 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.442 09:12:53 event -- common/autotest_common.sh@10 -- # set +x 00:07:07.442 ************************************ 00:07:07.442 START TEST event_reactor_perf 00:07:07.442 ************************************ 00:07:07.442 09:12:53 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:07.442 [2024-07-12 09:12:53.700745] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
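The two reactor tests in this part of event.sh follow the same pattern on a single core; a sketch of running them directly, under the same path assumption as before (the tick output above and the "events per second" figure below are what these binaries print):

```bash
#!/usr/bin/env bash
# Illustrative manual runs of the reactor tests driven by event.sh above.
SPDK_DIR=/home/vagrant/spdk_repo/spdk

# Functional test: fires a one-shot event plus periodic ticks for 1 second.
"$SPDK_DIR/test/event/reactor/reactor" -t 1

# Throughput test: reports "Performance: N events per second".
"$SPDK_DIR/test/event/reactor_perf/reactor_perf" -t 1
```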
00:07:07.442 [2024-07-12 09:12:53.701042] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63518 ] 00:07:07.700 [2024-07-12 09:12:53.890736] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.957 [2024-07-12 09:12:54.077756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.328 test_start 00:07:09.328 test_end 00:07:09.328 Performance: 266313 events per second 00:07:09.328 00:07:09.328 real 0m1.821s 00:07:09.328 user 0m1.598s 00:07:09.328 sys 0m0.111s 00:07:09.328 09:12:55 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.328 09:12:55 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:09.328 ************************************ 00:07:09.329 END TEST event_reactor_perf 00:07:09.329 ************************************ 00:07:09.329 09:12:55 event -- common/autotest_common.sh@1142 -- # return 0 00:07:09.329 09:12:55 event -- event/event.sh@49 -- # uname -s 00:07:09.329 09:12:55 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:09.329 09:12:55 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:09.329 09:12:55 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.329 09:12:55 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.329 09:12:55 event -- common/autotest_common.sh@10 -- # set +x 00:07:09.329 ************************************ 00:07:09.329 START TEST event_scheduler 00:07:09.329 ************************************ 00:07:09.329 09:12:55 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:09.329 * Looking for test storage... 00:07:09.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:09.329 09:12:55 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:09.329 09:12:55 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=63586 00:07:09.329 09:12:55 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:09.329 09:12:55 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 63586 00:07:09.329 09:12:55 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:09.329 09:12:55 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 63586 ']' 00:07:09.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.329 09:12:55 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.329 09:12:55 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.329 09:12:55 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.329 09:12:55 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.329 09:12:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:09.587 [2024-07-12 09:12:55.712443] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
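The scheduler test above launches its app with --wait-for-rpc and then blocks until the RPC socket answers. A condensed sketch of that start-up handshake is below, using the arguments from the trace; the polling loop is a simplified stand-in for the test's waitforlisten helper, and the socket path is the default assumed by the trace.

```bash
#!/usr/bin/env bash
# Condensed sketch of how scheduler.sh above brings up its test app.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/spdk.sock               # socket the test waits on

# -m 0xF: four cores, -p 0x2: main lcore 2, --wait-for-rpc: pause before init,
# -f: stay in the foreground (flags as shown in the trace above).
"$SPDK_DIR/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
scheduler_pid=$!
trap 'kill $scheduler_pid' SIGINT SIGTERM EXIT

# Simplified stand-in for waitforlisten: poll until the RPC server responds.
until "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &>/dev/null; do
    sleep 0.5
done
echo "scheduler app (pid $scheduler_pid) is listening on $RPC_SOCK"
```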
00:07:09.587 [2024-07-12 09:12:55.712618] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63586 ] 00:07:09.587 [2024-07-12 09:12:55.882828] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:09.844 [2024-07-12 09:12:56.072916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.844 [2024-07-12 09:12:56.073074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.844 [2024-07-12 09:12:56.073175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.844 [2024-07-12 09:12:56.073233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.408 09:12:56 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.408 09:12:56 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:07:10.408 09:12:56 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:10.408 09:12:56 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.408 09:12:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:10.408 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:10.408 POWER: Cannot set governor of lcore 0 to userspace 00:07:10.408 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:10.408 POWER: Cannot set governor of lcore 0 to performance 00:07:10.408 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:10.408 POWER: Cannot set governor of lcore 0 to userspace 00:07:10.408 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:10.408 POWER: Cannot set governor of lcore 0 to userspace 00:07:10.408 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:10.408 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:10.408 POWER: Unable to set Power Management Environment for lcore 0 00:07:10.408 [2024-07-12 09:12:56.590764] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:07:10.408 [2024-07-12 09:12:56.590786] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:07:10.408 [2024-07-12 09:12:56.590802] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:07:10.408 [2024-07-12 09:12:56.590824] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:10.408 [2024-07-12 09:12:56.590840] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:10.408 [2024-07-12 09:12:56.590851] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:10.408 09:12:56 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.408 09:12:56 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:10.408 09:12:56 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.408 09:12:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:10.666 [2024-07-12 09:12:56.867820] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
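Once the app is up, the test selects the dynamic scheduler and completes initialization over RPC, as shown in the trace above (the POWER/governor errors are expected inside a VM with no cpufreq sysfs, so the dpdk governor falls back gracefully). A minimal sketch using scripts/rpc.py directly; the test's rpc_cmd helper ultimately drives the same RPC methods.

```bash
#!/usr/bin/env bash
# RPC calls corresponding to the framework_set_scheduler / framework_start_init
# steps traced above.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/spdk.sock

# Select the dynamic scheduler while the app is still paused in --wait-for-rpc.
"$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" framework_set_scheduler dynamic

# Finish subsystem init; after this the reactors start on all configured cores.
"$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" framework_start_init
```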
00:07:10.666 09:12:56 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.666 09:12:56 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:10.666 09:12:56 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:10.666 09:12:56 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.666 09:12:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:10.666 ************************************ 00:07:10.666 START TEST scheduler_create_thread 00:07:10.666 ************************************ 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.666 2 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.666 3 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.666 4 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.666 5 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.666 6 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.666 7 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.666 8 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.666 9 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.666 10 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.666 09:12:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.043 09:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.043 09:12:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:12.043 09:12:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:12.043 09:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.043 09:12:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.975 09:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.975 00:07:12.975 real 0m2.137s 00:07:12.975 user 0m0.018s 00:07:12.975 sys 0m0.007s 00:07:12.975 ************************************ 00:07:12.975 END TEST scheduler_create_thread 00:07:12.975 ************************************ 00:07:12.975 09:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.975 09:12:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.975 09:12:59 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:07:12.975 09:12:59 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:12.975 09:12:59 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 63586 00:07:12.975 09:12:59 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 63586 ']' 00:07:12.975 09:12:59 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 63586 00:07:12.975 09:12:59 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:07:12.975 09:12:59 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:12.975 09:12:59 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63586 00:07:12.975 09:12:59 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:12.975 killing process with pid 63586 00:07:12.975 09:12:59 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:12.975 09:12:59 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63586' 00:07:12.975 09:12:59 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 63586 00:07:12.975 09:12:59 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 63586 00:07:13.233 [2024-07-12 09:12:59.496924] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
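scheduler_create_thread above exercises the scheduler_plugin RPCs: it creates busy and idle threads pinned to each core, one thread at ~30% activity, raises another thread's active level to 50, and deletes a final one. A condensed sketch of that RPC sequence is below; the method names, flags, and values come from the trace, while the PYTHONPATH line and the loop structure are illustrative assumptions.

```bash
#!/usr/bin/env bash
# Condensed version of the scheduler_plugin RPC sequence traced above.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
export PYTHONPATH="$SPDK_DIR/test/event/scheduler:${PYTHONPATH:-}"  # plugin location assumed
RPC=( "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock --plugin scheduler_plugin )

# Busy threads pinned to cores 0-3 (-m core mask, -a active percentage).
for mask in 0x1 0x2 0x4 0x8; do
    "${RPC[@]}" scheduler_thread_create -n active_pinned -m "$mask" -a 100
done

# Idle threads on the same cores.
for mask in 0x1 0x2 0x4 0x8; do
    "${RPC[@]}" scheduler_thread_create -n idle_pinned -m "$mask" -a 0
done

# One thread ~30% active, one made 50% active after creation, and one deleted
# again; the create calls print the new thread ID (11 and 12 in the trace).
"${RPC[@]}" scheduler_thread_create -n one_third_active -a 30
id=$("${RPC[@]}" scheduler_thread_create -n half_active -a 0)
"${RPC[@]}" scheduler_thread_set_active "$id" 50
id=$("${RPC[@]}" scheduler_thread_create -n deleted -a 100)
"${RPC[@]}" scheduler_thread_delete "$id"
```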
00:07:14.604 00:07:14.604 real 0m5.110s 00:07:14.604 user 0m8.408s 00:07:14.604 sys 0m0.411s 00:07:14.604 ************************************ 00:07:14.604 END TEST event_scheduler 00:07:14.604 ************************************ 00:07:14.604 09:13:00 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.604 09:13:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:14.604 09:13:00 event -- common/autotest_common.sh@1142 -- # return 0 00:07:14.604 09:13:00 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:14.604 09:13:00 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:14.604 09:13:00 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:14.604 09:13:00 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.604 09:13:00 event -- common/autotest_common.sh@10 -- # set +x 00:07:14.604 ************************************ 00:07:14.604 START TEST app_repeat 00:07:14.604 ************************************ 00:07:14.604 09:13:00 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:07:14.604 09:13:00 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.604 09:13:00 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.604 09:13:00 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:14.604 09:13:00 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:14.604 09:13:00 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:14.604 09:13:00 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:14.604 09:13:00 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:14.604 09:13:00 event.app_repeat -- event/event.sh@19 -- # repeat_pid=63687 00:07:14.604 Process app_repeat pid: 63687 00:07:14.604 09:13:00 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:14.604 09:13:00 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:14.604 09:13:00 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 63687' 00:07:14.604 09:13:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:14.604 09:13:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:14.604 spdk_app_start Round 0 00:07:14.604 09:13:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63687 /var/tmp/spdk-nbd.sock 00:07:14.604 09:13:00 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63687 ']' 00:07:14.604 09:13:00 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:14.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:14.604 09:13:00 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.604 09:13:00 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:14.604 09:13:00 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.604 09:13:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:14.604 [2024-07-12 09:13:00.738242] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
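app_repeat above is launched much like the scheduler app, but against the NBD RPC socket and with the repeat count from the script. A minimal launch sketch with the arguments visible in the trace; -t 4 presumably carries the repeat_times=4 value set just before it.

```bash
#!/usr/bin/env bash
# Illustrative manual launch of the app_repeat test app traced above.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
NBD_SOCK=/var/tmp/spdk-nbd.sock

# -r: RPC socket, -m 0x3: two cores, -t 4: repeat count passed by event.sh.
"$SPDK_DIR/test/event/app_repeat/app_repeat" -r "$NBD_SOCK" -m 0x3 -t 4 &
repeat_pid=$!
trap 'kill $repeat_pid' SIGINT SIGTERM EXIT
echo "Process app_repeat pid: $repeat_pid"
```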
00:07:14.604 [2024-07-12 09:13:00.738401] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63687 ] 00:07:14.604 [2024-07-12 09:13:00.902361] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.862 [2024-07-12 09:13:01.089614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.862 [2024-07-12 09:13:01.089622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.435 09:13:01 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.436 09:13:01 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:15.436 09:13:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:15.693 Malloc0 00:07:15.693 09:13:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:16.260 Malloc1 00:07:16.260 09:13:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:16.260 09:13:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.260 09:13:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:16.260 09:13:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:16.260 09:13:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.260 09:13:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:16.260 09:13:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:16.260 09:13:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.260 09:13:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:16.260 09:13:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:16.260 09:13:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.260 09:13:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:16.260 09:13:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:16.260 09:13:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:16.260 09:13:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:16.260 09:13:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:16.260 /dev/nbd0 00:07:16.260 09:13:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:16.260 09:13:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:16.260 09:13:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:16.260 09:13:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:16.260 09:13:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:16.260 09:13:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:16.260 09:13:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:16.260 09:13:02 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:07:16.260 09:13:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:16.260 09:13:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:16.260 09:13:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:16.260 1+0 records in 00:07:16.260 1+0 records out 00:07:16.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394463 s, 10.4 MB/s 00:07:16.260 09:13:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:16.260 09:13:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:16.260 09:13:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:16.260 09:13:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:16.260 09:13:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:16.260 09:13:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.260 09:13:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:16.260 09:13:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:16.826 /dev/nbd1 00:07:16.826 09:13:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:16.826 09:13:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:16.826 09:13:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:16.826 09:13:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:16.826 09:13:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:16.826 09:13:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:16.826 09:13:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:16.826 09:13:02 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:16.826 09:13:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:16.826 09:13:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:16.826 09:13:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:16.826 1+0 records in 00:07:16.826 1+0 records out 00:07:16.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281841 s, 14.5 MB/s 00:07:16.826 09:13:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:16.826 09:13:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:16.826 09:13:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:16.826 09:13:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:16.826 09:13:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:16.826 09:13:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.826 09:13:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:16.826 09:13:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:16.826 09:13:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:07:16.826 09:13:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:16.826 09:13:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:16.826 { 00:07:16.826 "nbd_device": "/dev/nbd0", 00:07:16.826 "bdev_name": "Malloc0" 00:07:16.826 }, 00:07:16.826 { 00:07:16.826 "nbd_device": "/dev/nbd1", 00:07:16.826 "bdev_name": "Malloc1" 00:07:16.826 } 00:07:16.826 ]' 00:07:16.826 09:13:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:16.826 { 00:07:16.826 "nbd_device": "/dev/nbd0", 00:07:16.826 "bdev_name": "Malloc0" 00:07:16.827 }, 00:07:16.827 { 00:07:16.827 "nbd_device": "/dev/nbd1", 00:07:16.827 "bdev_name": "Malloc1" 00:07:16.827 } 00:07:16.827 ]' 00:07:16.827 09:13:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.084 09:13:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:17.084 /dev/nbd1' 00:07:17.084 09:13:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:17.084 /dev/nbd1' 00:07:17.084 09:13:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.084 09:13:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:17.084 09:13:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:17.084 09:13:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:17.084 09:13:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:17.084 09:13:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:17.084 09:13:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.084 09:13:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:17.084 09:13:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:17.084 09:13:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:17.084 09:13:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:17.084 09:13:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:17.084 256+0 records in 00:07:17.084 256+0 records out 00:07:17.084 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00709395 s, 148 MB/s 00:07:17.084 09:13:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.084 09:13:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:17.084 256+0 records in 00:07:17.084 256+0 records out 00:07:17.084 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027805 s, 37.7 MB/s 00:07:17.084 09:13:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.085 09:13:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:17.085 256+0 records in 00:07:17.085 256+0 records out 00:07:17.085 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0322362 s, 32.5 MB/s 00:07:17.085 09:13:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:17.085 09:13:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.085 09:13:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:17.085 09:13:03 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:17.085 09:13:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:17.085 09:13:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:17.085 09:13:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:17.085 09:13:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:17.085 09:13:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:17.085 09:13:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:17.085 09:13:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:17.085 09:13:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:17.085 09:13:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:17.085 09:13:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.085 09:13:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.085 09:13:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:17.085 09:13:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:17.085 09:13:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.085 09:13:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:17.343 09:13:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:17.343 09:13:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:17.343 09:13:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:17.343 09:13:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.343 09:13:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.343 09:13:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:17.343 09:13:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:17.343 09:13:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.343 09:13:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.343 09:13:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:17.601 09:13:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:17.601 09:13:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:17.601 09:13:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:17.601 09:13:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.601 09:13:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.601 09:13:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:17.601 09:13:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:17.601 09:13:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.601 09:13:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:17.601 09:13:03 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.601 09:13:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.859 09:13:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:17.859 09:13:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:17.859 09:13:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.859 09:13:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:17.859 09:13:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.859 09:13:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:17.859 09:13:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:17.859 09:13:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:17.859 09:13:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:17.859 09:13:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:17.859 09:13:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:17.859 09:13:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:17.859 09:13:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:18.426 09:13:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:19.800 [2024-07-12 09:13:05.711280] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:19.800 [2024-07-12 09:13:05.886054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.800 [2024-07-12 09:13:05.886059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.800 [2024-07-12 09:13:06.053457] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:19.800 [2024-07-12 09:13:06.053534] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:21.699 spdk_app_start Round 1 00:07:21.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:21.699 09:13:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:21.699 09:13:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:21.699 09:13:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63687 /var/tmp/spdk-nbd.sock 00:07:21.699 09:13:07 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63687 ']' 00:07:21.699 09:13:07 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:21.699 09:13:07 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.699 09:13:07 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
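Round 0 above walks through the malloc-bdev/NBD round trip: create two 64 MiB malloc bdevs with 4096-byte blocks, export them as /dev/nbd0 and /dev/nbd1, push 1 MiB of random data through each device with dd, compare it back with cmp, then detach the devices before the app is killed with spdk_kill_instance. A condensed sketch of that verify flow, reusing the RPCs and commands visible in the trace (paths and sizes as above):

```bash
#!/usr/bin/env bash
# Condensed sketch of the Round 0 NBD data-verify flow traced above.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC=( "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock )
TMP="$SPDK_DIR/test/event/nbdrandtest"     # scratch file path as used by the test

# Two 64 MiB malloc bdevs (4096-byte block size), each exported over NBD.
"${RPC[@]}" bdev_malloc_create 64 4096     # prints the new bdev name: Malloc0
"${RPC[@]}" bdev_malloc_create 64 4096     # prints the new bdev name: Malloc1
"${RPC[@]}" nbd_start_disk Malloc0 /dev/nbd0
"${RPC[@]}" nbd_start_disk Malloc1 /dev/nbd1

# Write 1 MiB of random data through each NBD device and read it back.
dd if=/dev/urandom of="$TMP" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$TMP" of="$nbd" bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$TMP" "$nbd"             # non-zero exit on any mismatch
done
rm -f "$TMP"

# Detach the devices again; the bdevs go away with the app itself.
"${RPC[@]}" nbd_stop_disk /dev/nbd0
"${RPC[@]}" nbd_stop_disk /dev/nbd1
```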
00:07:21.699 09:13:07 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.699 09:13:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:21.699 09:13:07 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.699 09:13:07 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:21.699 09:13:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:21.958 Malloc0 00:07:21.958 09:13:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:22.217 Malloc1 00:07:22.217 09:13:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:22.217 09:13:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.217 09:13:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:22.217 09:13:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:22.217 09:13:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.217 09:13:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:22.217 09:13:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:22.217 09:13:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.217 09:13:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:22.217 09:13:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:22.217 09:13:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.217 09:13:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:22.217 09:13:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:22.217 09:13:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:22.217 09:13:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:22.217 09:13:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:22.475 /dev/nbd0 00:07:22.475 09:13:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:22.475 09:13:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:22.475 09:13:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:22.475 09:13:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:22.475 09:13:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:22.475 09:13:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:22.475 09:13:08 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:22.475 09:13:08 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:22.475 09:13:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:22.475 09:13:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:22.475 09:13:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:22.475 1+0 records in 00:07:22.475 1+0 records out 
00:07:22.475 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321964 s, 12.7 MB/s 00:07:22.475 09:13:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:22.735 09:13:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:22.735 09:13:08 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:22.735 09:13:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:22.735 09:13:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:22.735 09:13:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:22.735 09:13:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:22.735 09:13:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:22.993 /dev/nbd1 00:07:22.993 09:13:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:22.993 09:13:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:22.993 09:13:09 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:22.993 09:13:09 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:22.993 09:13:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:22.993 09:13:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:22.993 09:13:09 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:22.993 09:13:09 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:22.993 09:13:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:22.993 09:13:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:22.993 09:13:09 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:22.993 1+0 records in 00:07:22.993 1+0 records out 00:07:22.993 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000619557 s, 6.6 MB/s 00:07:22.993 09:13:09 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:22.993 09:13:09 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:22.993 09:13:09 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:22.993 09:13:09 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:22.993 09:13:09 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:22.993 09:13:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:22.993 09:13:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:22.993 09:13:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:22.993 09:13:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.993 09:13:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:23.251 { 00:07:23.251 "nbd_device": "/dev/nbd0", 00:07:23.251 "bdev_name": "Malloc0" 00:07:23.251 }, 00:07:23.251 { 00:07:23.251 "nbd_device": "/dev/nbd1", 00:07:23.251 "bdev_name": "Malloc1" 00:07:23.251 } 
00:07:23.251 ]' 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:23.251 { 00:07:23.251 "nbd_device": "/dev/nbd0", 00:07:23.251 "bdev_name": "Malloc0" 00:07:23.251 }, 00:07:23.251 { 00:07:23.251 "nbd_device": "/dev/nbd1", 00:07:23.251 "bdev_name": "Malloc1" 00:07:23.251 } 00:07:23.251 ]' 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:23.251 /dev/nbd1' 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:23.251 /dev/nbd1' 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:23.251 256+0 records in 00:07:23.251 256+0 records out 00:07:23.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00715841 s, 146 MB/s 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:23.251 256+0 records in 00:07:23.251 256+0 records out 00:07:23.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02674 s, 39.2 MB/s 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:23.251 256+0 records in 00:07:23.251 256+0 records out 00:07:23.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298089 s, 35.2 MB/s 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.251 09:13:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:23.508 09:13:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:23.508 09:13:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:23.508 09:13:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:23.508 09:13:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.508 09:13:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.508 09:13:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:23.509 09:13:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:23.509 09:13:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.509 09:13:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.509 09:13:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:23.766 09:13:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:23.766 09:13:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:23.766 09:13:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:23.766 09:13:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.766 09:13:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.766 09:13:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:23.766 09:13:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:23.766 09:13:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.766 09:13:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:23.766 09:13:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.766 09:13:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:24.024 09:13:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:24.024 09:13:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:24.024 09:13:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:07:24.282 09:13:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:24.282 09:13:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:24.282 09:13:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:24.282 09:13:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:24.282 09:13:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:24.282 09:13:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:24.282 09:13:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:24.282 09:13:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:24.282 09:13:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:24.282 09:13:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:24.539 09:13:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:25.914 [2024-07-12 09:13:11.918631] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:25.914 [2024-07-12 09:13:12.088099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.914 [2024-07-12 09:13:12.088105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.914 [2024-07-12 09:13:12.255053] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:25.914 [2024-07-12 09:13:12.255173] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:27.813 spdk_app_start Round 2 00:07:27.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:27.813 09:13:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:27.813 09:13:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:27.813 09:13:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63687 /var/tmp/spdk-nbd.sock 00:07:27.813 09:13:13 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63687 ']' 00:07:27.813 09:13:13 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:27.813 09:13:13 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:27.813 09:13:13 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:27.813 09:13:13 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:27.813 09:13:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:27.813 09:13:14 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:27.813 09:13:14 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:27.813 09:13:14 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:28.071 Malloc0 00:07:28.071 09:13:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:28.638 Malloc1 00:07:28.638 09:13:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:28.638 09:13:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.638 09:13:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:28.638 09:13:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:28.638 09:13:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:28.638 09:13:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:28.638 09:13:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:28.638 09:13:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.638 09:13:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:28.638 09:13:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:28.638 09:13:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:28.638 09:13:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:28.638 09:13:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:28.638 09:13:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:28.638 09:13:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:28.638 09:13:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:28.638 /dev/nbd0 00:07:28.896 09:13:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:28.896 09:13:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:28.896 09:13:15 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:28.896 09:13:15 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:28.896 09:13:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:28.896 09:13:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:28.896 09:13:15 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:28.896 09:13:15 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:28.897 09:13:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:28.897 09:13:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:28.897 09:13:15 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:28.897 1+0 records in 00:07:28.897 1+0 records out 
00:07:28.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000660344 s, 6.2 MB/s 00:07:28.897 09:13:15 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:28.897 09:13:15 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:28.897 09:13:15 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:28.897 09:13:15 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:28.897 09:13:15 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:28.897 09:13:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:28.897 09:13:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:28.897 09:13:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:29.155 /dev/nbd1 00:07:29.155 09:13:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:29.155 09:13:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:29.155 09:13:15 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:29.155 09:13:15 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:29.155 09:13:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:29.155 09:13:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:29.155 09:13:15 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:29.155 09:13:15 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:29.155 09:13:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:29.155 09:13:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:29.155 09:13:15 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:29.155 1+0 records in 00:07:29.155 1+0 records out 00:07:29.155 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330866 s, 12.4 MB/s 00:07:29.155 09:13:15 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:29.155 09:13:15 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:29.155 09:13:15 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:29.155 09:13:15 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:29.155 09:13:15 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:29.155 09:13:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:29.155 09:13:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:29.155 09:13:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:29.155 09:13:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.155 09:13:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:29.413 { 00:07:29.413 "nbd_device": "/dev/nbd0", 00:07:29.413 "bdev_name": "Malloc0" 00:07:29.413 }, 00:07:29.413 { 00:07:29.413 "nbd_device": "/dev/nbd1", 00:07:29.413 "bdev_name": "Malloc1" 00:07:29.413 } 
00:07:29.413 ]' 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:29.413 { 00:07:29.413 "nbd_device": "/dev/nbd0", 00:07:29.413 "bdev_name": "Malloc0" 00:07:29.413 }, 00:07:29.413 { 00:07:29.413 "nbd_device": "/dev/nbd1", 00:07:29.413 "bdev_name": "Malloc1" 00:07:29.413 } 00:07:29.413 ]' 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:29.413 /dev/nbd1' 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:29.413 /dev/nbd1' 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:29.413 256+0 records in 00:07:29.413 256+0 records out 00:07:29.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0077643 s, 135 MB/s 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:29.413 256+0 records in 00:07:29.413 256+0 records out 00:07:29.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0316395 s, 33.1 MB/s 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:29.413 256+0 records in 00:07:29.413 256+0 records out 00:07:29.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0303129 s, 34.6 MB/s 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:29.413 09:13:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:29.671 09:13:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:29.671 09:13:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:29.671 09:13:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:29.671 09:13:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:29.671 09:13:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.671 09:13:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:29.671 09:13:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:29.671 09:13:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:29.671 09:13:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:29.671 09:13:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:29.928 09:13:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:29.928 09:13:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:29.928 09:13:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:29.928 09:13:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:29.928 09:13:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:29.928 09:13:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:29.928 09:13:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:29.928 09:13:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:29.928 09:13:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:29.928 09:13:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:30.279 09:13:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:30.279 09:13:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:30.279 09:13:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:30.279 09:13:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:30.279 09:13:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:30.279 09:13:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:30.279 09:13:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:30.279 09:13:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:30.279 09:13:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:30.279 09:13:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.279 09:13:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:30.279 09:13:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:30.279 09:13:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:30.279 09:13:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:07:30.552 09:13:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:30.552 09:13:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:30.552 09:13:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:30.552 09:13:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:30.552 09:13:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:30.552 09:13:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:30.552 09:13:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:30.552 09:13:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:30.552 09:13:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:30.552 09:13:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:30.810 09:13:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:32.183 [2024-07-12 09:13:18.240872] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:32.183 [2024-07-12 09:13:18.426037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.183 [2024-07-12 09:13:18.426048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.441 [2024-07-12 09:13:18.597564] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:32.441 [2024-07-12 09:13:18.597690] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:33.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:33.814 09:13:20 event.app_repeat -- event/event.sh@38 -- # waitforlisten 63687 /var/tmp/spdk-nbd.sock 00:07:33.814 09:13:20 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63687 ']' 00:07:33.814 09:13:20 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:33.814 09:13:20 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:33.814 09:13:20 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:33.814 09:13:20 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:33.814 09:13:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:34.071 09:13:20 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:34.071 09:13:20 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:34.071 09:13:20 event.app_repeat -- event/event.sh@39 -- # killprocess 63687 00:07:34.071 09:13:20 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 63687 ']' 00:07:34.071 09:13:20 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 63687 00:07:34.071 09:13:20 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:07:34.071 09:13:20 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:34.071 09:13:20 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63687 00:07:34.071 killing process with pid 63687 00:07:34.071 09:13:20 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:34.071 09:13:20 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:34.071 09:13:20 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63687' 00:07:34.071 09:13:20 event.app_repeat -- common/autotest_common.sh@967 -- # kill 63687 00:07:34.071 09:13:20 event.app_repeat -- common/autotest_common.sh@972 -- # wait 63687 00:07:35.445 spdk_app_start is called in Round 0. 00:07:35.445 Shutdown signal received, stop current app iteration 00:07:35.445 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:07:35.445 spdk_app_start is called in Round 1. 00:07:35.445 Shutdown signal received, stop current app iteration 00:07:35.445 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:07:35.445 spdk_app_start is called in Round 2. 00:07:35.445 Shutdown signal received, stop current app iteration 00:07:35.445 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:07:35.445 spdk_app_start is called in Round 3. 
00:07:35.445 Shutdown signal received, stop current app iteration 00:07:35.445 09:13:21 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:35.445 09:13:21 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:35.445 00:07:35.445 real 0m20.770s 00:07:35.445 user 0m44.977s 00:07:35.445 sys 0m2.608s 00:07:35.445 09:13:21 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.445 ************************************ 00:07:35.445 END TEST app_repeat 00:07:35.445 ************************************ 00:07:35.445 09:13:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:35.445 09:13:21 event -- common/autotest_common.sh@1142 -- # return 0 00:07:35.445 09:13:21 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:35.445 09:13:21 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:35.445 09:13:21 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:35.445 09:13:21 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.445 09:13:21 event -- common/autotest_common.sh@10 -- # set +x 00:07:35.445 ************************************ 00:07:35.445 START TEST cpu_locks 00:07:35.445 ************************************ 00:07:35.445 09:13:21 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:35.445 * Looking for test storage... 00:07:35.445 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:35.445 09:13:21 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:35.445 09:13:21 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:35.445 09:13:21 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:35.445 09:13:21 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:35.445 09:13:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:35.445 09:13:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.446 09:13:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:35.446 ************************************ 00:07:35.446 START TEST default_locks 00:07:35.446 ************************************ 00:07:35.446 09:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:07:35.446 09:13:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=64143 00:07:35.446 09:13:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 64143 00:07:35.446 09:13:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:35.446 09:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 64143 ']' 00:07:35.446 09:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.446 09:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:35.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.446 09:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:35.446 09:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:35.446 09:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:35.446 [2024-07-12 09:13:21.701018] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:35.446 [2024-07-12 09:13:21.701181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64143 ] 00:07:35.704 [2024-07-12 09:13:21.904279] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.962 [2024-07-12 09:13:22.104913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.527 09:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:36.527 09:13:22 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:07:36.527 09:13:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 64143 00:07:36.527 09:13:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 64143 00:07:36.527 09:13:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:37.095 09:13:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 64143 00:07:37.095 09:13:23 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 64143 ']' 00:07:37.095 09:13:23 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 64143 00:07:37.095 09:13:23 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:07:37.095 09:13:23 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:37.095 09:13:23 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64143 00:07:37.095 09:13:23 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:37.095 killing process with pid 64143 00:07:37.095 09:13:23 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:37.095 09:13:23 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64143' 00:07:37.095 09:13:23 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 64143 00:07:37.095 09:13:23 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 64143 00:07:39.620 09:13:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 64143 00:07:39.620 09:13:25 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:39.620 09:13:25 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64143 00:07:39.620 09:13:25 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:39.620 09:13:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:39.620 09:13:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:39.620 09:13:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:39.620 09:13:25 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 64143 00:07:39.620 09:13:25 event.cpu_locks.default_locks -- 
common/autotest_common.sh@829 -- # '[' -z 64143 ']' 00:07:39.620 09:13:25 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.620 09:13:25 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:39.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.620 09:13:25 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.620 09:13:25 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:39.620 09:13:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:39.620 ERROR: process (pid: 64143) is no longer running 00:07:39.620 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64143) - No such process 00:07:39.620 09:13:25 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:39.620 09:13:25 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:07:39.620 09:13:25 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:39.620 09:13:25 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:39.620 09:13:25 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:39.620 09:13:25 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:39.620 09:13:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:39.620 09:13:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:39.620 09:13:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:39.620 09:13:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:39.620 00:07:39.620 real 0m3.833s 00:07:39.620 user 0m3.954s 00:07:39.620 sys 0m0.586s 00:07:39.620 09:13:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.620 ************************************ 00:07:39.620 END TEST default_locks 00:07:39.620 ************************************ 00:07:39.620 09:13:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:39.620 09:13:25 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:39.620 09:13:25 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:39.620 09:13:25 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:39.620 09:13:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.620 09:13:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:39.620 ************************************ 00:07:39.620 START TEST default_locks_via_rpc 00:07:39.620 ************************************ 00:07:39.620 09:13:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:07:39.620 09:13:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=64218 00:07:39.620 09:13:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:39.620 09:13:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 64218 00:07:39.620 09:13:25 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@829 -- # '[' -z 64218 ']' 00:07:39.620 09:13:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.620 09:13:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:39.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.620 09:13:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.620 09:13:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:39.620 09:13:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.620 [2024-07-12 09:13:25.603299] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:39.620 [2024-07-12 09:13:25.604045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64218 ] 00:07:39.620 [2024-07-12 09:13:25.781320] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.882 [2024-07-12 09:13:26.059764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.452 09:13:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:40.452 09:13:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:40.452 09:13:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:40.452 09:13:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.452 09:13:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.452 09:13:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.452 09:13:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:40.452 09:13:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:40.452 09:13:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:40.452 09:13:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:40.452 09:13:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:40.452 09:13:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.452 09:13:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.452 09:13:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.452 09:13:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 64218 00:07:40.452 09:13:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:40.452 09:13:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 64218 00:07:41.018 09:13:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 64218 00:07:41.018 09:13:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 64218 ']' 
00:07:41.018 09:13:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 64218 00:07:41.018 09:13:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:07:41.018 09:13:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:41.018 09:13:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64218 00:07:41.018 09:13:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:41.018 09:13:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:41.018 09:13:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64218' 00:07:41.018 killing process with pid 64218 00:07:41.018 09:13:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 64218 00:07:41.018 09:13:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 64218 00:07:43.545 00:07:43.545 real 0m3.882s 00:07:43.545 user 0m4.057s 00:07:43.545 sys 0m0.609s 00:07:43.545 ************************************ 00:07:43.545 END TEST default_locks_via_rpc 00:07:43.545 ************************************ 00:07:43.545 09:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.545 09:13:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.545 09:13:29 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:43.545 09:13:29 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:43.545 09:13:29 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:43.545 09:13:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.545 09:13:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:43.545 ************************************ 00:07:43.545 START TEST non_locking_app_on_locked_coremask 00:07:43.545 ************************************ 00:07:43.545 09:13:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:07:43.545 09:13:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=64286 00:07:43.545 09:13:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 64286 /var/tmp/spdk.sock 00:07:43.545 09:13:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64286 ']' 00:07:43.545 09:13:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.545 09:13:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:43.545 09:13:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:43.545 09:13:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:43.545 09:13:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:43.545 09:13:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:43.545 [2024-07-12 09:13:29.511390] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:43.545 [2024-07-12 09:13:29.511551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64286 ] 00:07:43.545 [2024-07-12 09:13:29.675596] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.545 [2024-07-12 09:13:29.858593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.479 09:13:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.479 09:13:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:44.479 09:13:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=64308 00:07:44.479 09:13:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 64308 /var/tmp/spdk2.sock 00:07:44.479 09:13:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:44.479 09:13:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64308 ']' 00:07:44.479 09:13:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:44.479 09:13:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:44.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:44.479 09:13:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:44.479 09:13:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:44.479 09:13:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.479 [2024-07-12 09:13:30.718453] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:44.479 [2024-07-12 09:13:30.718604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64308 ] 00:07:44.737 [2024-07-12 09:13:30.899077] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:44.737 [2024-07-12 09:13:30.899161] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.303 [2024-07-12 09:13:31.365044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.272 09:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:47.272 09:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:47.272 09:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 64286 00:07:47.272 09:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64286 00:07:47.272 09:13:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:47.842 09:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 64286 00:07:47.842 09:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64286 ']' 00:07:47.842 09:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64286 00:07:47.842 09:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:47.842 09:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:47.842 09:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64286 00:07:47.842 09:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:47.842 09:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:47.842 killing process with pid 64286 00:07:47.842 09:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64286' 00:07:47.842 09:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64286 00:07:47.842 09:13:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64286 00:07:52.026 09:13:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 64308 00:07:52.026 09:13:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64308 ']' 00:07:52.026 09:13:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64308 00:07:52.026 09:13:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:52.026 09:13:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:52.026 09:13:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64308 00:07:52.026 09:13:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:52.026 killing process with pid 64308 00:07:52.026 09:13:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:52.026 09:13:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64308' 00:07:52.026 09:13:38 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64308 00:07:52.026 09:13:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64308 00:07:54.557 00:07:54.557 real 0m11.020s 00:07:54.557 user 0m11.753s 00:07:54.557 sys 0m1.118s 00:07:54.557 ************************************ 00:07:54.557 END TEST non_locking_app_on_locked_coremask 00:07:54.557 ************************************ 00:07:54.557 09:13:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.557 09:13:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:54.557 09:13:40 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:54.557 09:13:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:54.557 09:13:40 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:54.557 09:13:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.557 09:13:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:54.557 ************************************ 00:07:54.557 START TEST locking_app_on_unlocked_coremask 00:07:54.557 ************************************ 00:07:54.557 09:13:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:54.557 09:13:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:54.557 09:13:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=64445 00:07:54.557 09:13:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 64445 /var/tmp/spdk.sock 00:07:54.557 09:13:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64445 ']' 00:07:54.557 09:13:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.557 09:13:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:54.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.557 09:13:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.557 09:13:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:54.557 09:13:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:54.557 [2024-07-12 09:13:40.578478] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:54.557 [2024-07-12 09:13:40.578870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64445 ] 00:07:54.557 [2024-07-12 09:13:40.742808] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
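Every "Waiting for process to start up and listen on UNIX domain socket ..." line above is printed by the waitforlisten helper, which blocks (max_retries=100 in the traces) until the freshly started target answers on its RPC socket. A rough, simplified stand-in is sketched below; the real helper in test/common/autotest_common.sh does more, e.g. it can also issue an RPC to confirm the target is responsive.

    # Simplified waitforlisten: wait until the RPC socket exists and the pid is alive.
    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while we waited
            [ -S "$sock" ] && return 0               # socket is up, assume it is listening
            sleep 0.1
        done
        return 1                                     # gave up
    }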
00:07:54.557 [2024-07-12 09:13:40.742867] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.816 [2024-07-12 09:13:40.928456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:55.382 09:13:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:55.382 09:13:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:55.382 09:13:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=64461 00:07:55.382 09:13:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 64461 /var/tmp/spdk2.sock 00:07:55.382 09:13:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:55.382 09:13:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64461 ']' 00:07:55.382 09:13:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:55.382 09:13:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:55.382 09:13:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:55.382 09:13:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:55.382 09:13:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:55.382 [2024-07-12 09:13:41.718834] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:55.382 [2024-07-12 09:13:41.719202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64461 ] 00:07:55.640 [2024-07-12 09:13:41.895987] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.207 [2024-07-12 09:13:42.267862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.107 09:13:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:58.107 09:13:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:58.107 09:13:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 64461 00:07:58.107 09:13:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64461 00:07:58.107 09:13:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:59.040 09:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 64445 00:07:59.040 09:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64445 ']' 00:07:59.040 09:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 64445 00:07:59.040 09:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:59.040 09:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:59.040 09:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64445 00:07:59.040 killing process with pid 64445 00:07:59.040 09:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:59.040 09:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:59.040 09:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64445' 00:07:59.040 09:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 64445 00:07:59.040 09:13:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 64445 00:08:03.254 09:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 64461 00:08:03.254 09:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64461 ']' 00:08:03.254 09:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 64461 00:08:03.254 09:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:08:03.254 09:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:03.254 09:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64461 00:08:03.254 killing process with pid 64461 00:08:03.254 09:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:03.254 09:13:49 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:03.254 09:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64461' 00:08:03.254 09:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 64461 00:08:03.254 09:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 64461 00:08:05.783 ************************************ 00:08:05.783 END TEST locking_app_on_unlocked_coremask 00:08:05.783 ************************************ 00:08:05.783 00:08:05.783 real 0m11.095s 00:08:05.783 user 0m11.880s 00:08:05.783 sys 0m1.176s 00:08:05.783 09:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.783 09:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:05.783 09:13:51 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:08:05.783 09:13:51 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:05.783 09:13:51 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:05.783 09:13:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.783 09:13:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:05.783 ************************************ 00:08:05.783 START TEST locking_app_on_locked_coremask 00:08:05.783 ************************************ 00:08:05.783 09:13:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:08:05.783 09:13:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=64609 00:08:05.783 09:13:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:05.783 09:13:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 64609 /var/tmp/spdk.sock 00:08:05.783 09:13:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64609 ']' 00:08:05.783 09:13:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.783 09:13:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:05.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.783 09:13:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.783 09:13:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:05.783 09:13:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:05.783 [2024-07-12 09:13:51.740298] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
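The lslocks -p <pid> | grep -q spdk_cpu_lock steps that show up before each killprocess above are the locks_exist check: it succeeds only while the given target still holds a POSIX lock on one of the /var/tmp/spdk_cpu_lock_* files. Pulled out of the test script as a standalone sketch:

    # Does process $1 still hold a CPU-core lock file? (exit 0 = yes)
    locks_exist_sketch() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    locks_exist_sketch "$pid1" && echo "core lock still held by process $pid1"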
00:08:05.783 [2024-07-12 09:13:51.740455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64609 ] 00:08:05.783 [2024-07-12 09:13:51.903891] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.783 [2024-07-12 09:13:52.094018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.718 09:13:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:06.718 09:13:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:08:06.718 09:13:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=64625 00:08:06.718 09:13:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 64625 /var/tmp/spdk2.sock 00:08:06.718 09:13:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:06.718 09:13:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:08:06.718 09:13:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64625 /var/tmp/spdk2.sock 00:08:06.718 09:13:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:08:06.718 09:13:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:06.718 09:13:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:08:06.718 09:13:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:06.718 09:13:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 64625 /var/tmp/spdk2.sock 00:08:06.718 09:13:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64625 ']' 00:08:06.718 09:13:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:06.718 09:13:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:06.718 09:13:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:06.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:06.718 09:13:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:06.718 09:13:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:06.718 [2024-07-12 09:13:52.912433] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
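The NOT waitforlisten chain above is the expect-failure half of locking_app_on_locked_coremask: the second target (pid 64625) is started without --disable-cpumask-locks on a core that pid 64609 already holds, so it must exit instead of ever listening on /var/tmp/spdk2.sock; the claim error appears just below. A minimal stand-in for that wrapper (the real NOT helper in autotest_common.sh also validates the command and tracks es):

    # Expect-failure wrapper: succeed only if the wrapped command fails.
    NOT_sketch() {
        if "$@"; then
            return 1   # command unexpectedly succeeded
        fi
        return 0       # command failed, which is what the test wants
    }

    # e.g.: NOT_sketch waitforlisten_sketch "$pid2" /var/tmp/spdk2.sock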
00:08:06.718 [2024-07-12 09:13:52.912907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64625 ] 00:08:06.976 [2024-07-12 09:13:53.096340] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 64609 has claimed it. 00:08:06.976 [2024-07-12 09:13:53.096491] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:07.234 ERROR: process (pid: 64625) is no longer running 00:08:07.234 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64625) - No such process 00:08:07.234 09:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:07.234 09:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:08:07.234 09:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:08:07.234 09:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:07.234 09:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:07.234 09:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:07.234 09:13:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 64609 00:08:07.234 09:13:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64609 00:08:07.234 09:13:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:07.801 09:13:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 64609 00:08:07.802 09:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64609 ']' 00:08:07.802 09:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64609 00:08:07.802 09:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:08:07.802 09:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:07.802 09:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64609 00:08:07.802 killing process with pid 64609 00:08:07.802 09:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:07.802 09:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:07.802 09:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64609' 00:08:07.802 09:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64609 00:08:07.802 09:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64609 00:08:10.330 00:08:10.330 real 0m4.454s 00:08:10.330 user 0m4.858s 00:08:10.330 sys 0m0.715s 00:08:10.330 09:13:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.330 ************************************ 00:08:10.330 END 
TEST locking_app_on_locked_coremask 00:08:10.330 ************************************ 00:08:10.330 09:13:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:10.330 09:13:56 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:08:10.330 09:13:56 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:10.330 09:13:56 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:10.330 09:13:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.330 09:13:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:10.330 ************************************ 00:08:10.330 START TEST locking_overlapped_coremask 00:08:10.330 ************************************ 00:08:10.330 09:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:08:10.330 09:13:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=64695 00:08:10.330 09:13:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:10.330 09:13:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 64695 /var/tmp/spdk.sock 00:08:10.330 09:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 64695 ']' 00:08:10.330 09:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.330 09:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:10.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.330 09:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.330 09:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:10.330 09:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:10.330 [2024-07-12 09:13:56.251277] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:10.330 [2024-07-12 09:13:56.251452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64695 ] 00:08:10.330 [2024-07-12 09:13:56.424552] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:10.330 [2024-07-12 09:13:56.618535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.330 [2024-07-12 09:13:56.618661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.330 [2024-07-12 09:13:56.618671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.264 09:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:11.264 09:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:08:11.264 09:13:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=64713 00:08:11.264 09:13:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:11.264 09:13:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 64713 /var/tmp/spdk2.sock 00:08:11.264 09:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:08:11.264 09:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64713 /var/tmp/spdk2.sock 00:08:11.264 09:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:08:11.264 09:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:11.264 09:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:08:11.264 09:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:11.264 09:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 64713 /var/tmp/spdk2.sock 00:08:11.264 09:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 64713 ']' 00:08:11.264 09:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:11.264 09:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:11.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:11.264 09:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:11.264 09:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:11.264 09:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:11.264 [2024-07-12 09:13:57.439448] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
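For the overlapped-coremask case, the primary target runs with -m 0x7 (cores 0-2, reactors on 0/1/2 above) and the second target is started with -m 0x1c (cores 2-4); the two masks share core 2, which is exactly the core named in the claim failure that follows. The overlap is easy to confirm:

    # 0x7 covers cores 0,1,2 and 0x1c covers cores 2,3,4; the AND keeps the shared core.
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. bit 2 -> core 2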
00:08:11.264 [2024-07-12 09:13:57.439620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64713 ] 00:08:11.522 [2024-07-12 09:13:57.620022] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64695 has claimed it. 00:08:11.522 [2024-07-12 09:13:57.620099] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:11.780 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64713) - No such process 00:08:11.780 ERROR: process (pid: 64713) is no longer running 00:08:11.780 09:13:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:11.780 09:13:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:08:11.780 09:13:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:08:11.780 09:13:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:11.780 09:13:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:11.780 09:13:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:11.780 09:13:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:11.780 09:13:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:11.780 09:13:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:11.780 09:13:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:11.780 09:13:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 64695 00:08:11.780 09:13:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 64695 ']' 00:08:11.780 09:13:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 64695 00:08:11.780 09:13:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:08:11.780 09:13:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:11.780 09:13:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64695 00:08:12.089 09:13:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:12.089 09:13:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:12.089 09:13:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64695' 00:08:12.089 killing process with pid 64695 00:08:12.089 09:13:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 64695 00:08:12.089 09:13:58 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 64695 00:08:14.003 ************************************ 00:08:14.003 END TEST locking_overlapped_coremask 00:08:14.003 ************************************ 00:08:14.003 00:08:14.003 real 0m4.124s 00:08:14.003 user 0m10.797s 00:08:14.003 sys 0m0.530s 00:08:14.003 09:14:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.003 09:14:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:14.003 09:14:00 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:08:14.003 09:14:00 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:14.003 09:14:00 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:14.003 09:14:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.003 09:14:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:14.003 ************************************ 00:08:14.003 START TEST locking_overlapped_coremask_via_rpc 00:08:14.003 ************************************ 00:08:14.003 09:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:08:14.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.003 09:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=64777 00:08:14.003 09:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 64777 /var/tmp/spdk.sock 00:08:14.003 09:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:14.003 09:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64777 ']' 00:08:14.003 09:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.003 09:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:14.003 09:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.003 09:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:14.003 09:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.261 [2024-07-12 09:14:00.412577] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:14.261 [2024-07-12 09:14:00.412970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64777 ] 00:08:14.261 [2024-07-12 09:14:00.579129] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
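check_remaining_locks, used at the end of the overlapped-coremask run above, simply globs /var/tmp/spdk_cpu_lock_* and compares the result with the files expected for the three claimed cores. Written out as a standalone sketch of the same comparison:

    # Expect exactly one lock file per claimed core (000..002 for mask 0x7).
    check_remaining_locks_sketch() {
        local locks=(/var/tmp/spdk_cpu_lock_*)
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ "${locks[*]}" == "${locks_expected[*]}" ]]
    }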
00:08:14.261 [2024-07-12 09:14:00.579451] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:14.520 [2024-07-12 09:14:00.797173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.520 [2024-07-12 09:14:00.797329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.520 [2024-07-12 09:14:00.797400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.454 09:14:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:15.454 09:14:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:15.454 09:14:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=64795 00:08:15.454 09:14:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:15.454 09:14:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 64795 /var/tmp/spdk2.sock 00:08:15.454 09:14:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64795 ']' 00:08:15.454 09:14:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:15.454 09:14:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:15.454 09:14:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:15.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:15.454 09:14:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:15.454 09:14:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.454 [2024-07-12 09:14:01.641203] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:15.454 [2024-07-12 09:14:01.641594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64795 ] 00:08:15.711 [2024-07-12 09:14:01.818619] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:15.711 [2024-07-12 09:14:01.818718] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:15.968 [2024-07-12 09:14:02.197494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.968 [2024-07-12 09:14:02.201340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.968 [2024-07-12 09:14:02.201360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.343 [2024-07-12 09:14:03.603447] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64777 has claimed it. 00:08:17.343 request: 00:08:17.343 { 00:08:17.343 "method": "framework_enable_cpumask_locks", 00:08:17.343 "req_id": 1 00:08:17.343 } 00:08:17.343 Got JSON-RPC error response 00:08:17.343 response: 00:08:17.343 { 00:08:17.343 "code": -32603, 00:08:17.343 "message": "Failed to claim CPU core: 2" 00:08:17.343 } 00:08:17.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
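The request/response pair above is the via-rpc variant: both targets start with --disable-cpumask-locks, the primary then turns locking on over JSON-RPC and claims cores 0-2, and the same call against the secondary fails with -32603 because core 2 is already taken. With both targets still running, the two calls can be reproduced with rpc.py (socket names as in the log; the repo path is an assumption):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$RPC" -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # primary: acquires the core locks
    "$RPC" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # secondary: "Failed to claim CPU core: 2"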
00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 64777 /var/tmp/spdk.sock 00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64777 ']' 00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:17.343 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.601 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:17.601 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:17.601 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 64795 /var/tmp/spdk2.sock 00:08:17.601 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64795 ']' 00:08:17.601 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:17.601 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:17.601 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:17.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:17.601 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:17.601 09:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.171 09:14:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:18.171 09:14:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:18.171 09:14:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:18.171 09:14:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:18.171 09:14:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:18.171 09:14:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:18.171 00:08:18.171 real 0m3.914s 00:08:18.171 user 0m1.522s 00:08:18.171 sys 0m0.192s 00:08:18.171 09:14:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.171 09:14:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.171 ************************************ 00:08:18.171 END TEST locking_overlapped_coremask_via_rpc 00:08:18.171 ************************************ 00:08:18.171 09:14:04 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:08:18.171 09:14:04 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:18.171 09:14:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64777 ]] 00:08:18.171 09:14:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64777 00:08:18.171 09:14:04 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64777 ']' 00:08:18.171 09:14:04 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64777 00:08:18.171 09:14:04 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:08:18.171 09:14:04 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:18.171 09:14:04 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64777 00:08:18.171 killing process with pid 64777 00:08:18.171 09:14:04 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:18.171 09:14:04 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:18.171 09:14:04 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64777' 00:08:18.171 09:14:04 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 64777 00:08:18.171 09:14:04 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 64777 00:08:20.070 09:14:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64795 ]] 00:08:20.070 09:14:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64795 00:08:20.070 09:14:06 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64795 ']' 00:08:20.070 09:14:06 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64795 00:08:20.070 09:14:06 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:08:20.070 09:14:06 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:20.070 09:14:06 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64795 00:08:20.328 killing process with pid 64795 00:08:20.328 09:14:06 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:08:20.328 09:14:06 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:08:20.328 09:14:06 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64795' 00:08:20.328 09:14:06 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 64795 00:08:20.328 09:14:06 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 64795 00:08:22.227 09:14:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:22.227 09:14:08 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:22.227 09:14:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64777 ]] 00:08:22.227 09:14:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64777 00:08:22.227 09:14:08 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64777 ']' 00:08:22.227 09:14:08 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64777 00:08:22.227 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (64777) - No such process 00:08:22.227 Process with pid 64777 is not found 00:08:22.227 Process with pid 64795 is not found 00:08:22.227 09:14:08 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 64777 is not found' 00:08:22.227 09:14:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64795 ]] 00:08:22.227 09:14:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64795 00:08:22.227 09:14:08 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64795 ']' 00:08:22.227 09:14:08 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64795 00:08:22.227 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (64795) - No such process 00:08:22.227 09:14:08 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 64795 is not found' 00:08:22.227 09:14:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:22.227 00:08:22.227 real 0m47.019s 00:08:22.227 user 1m19.026s 00:08:22.227 sys 0m5.820s 00:08:22.227 09:14:08 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.227 ************************************ 00:08:22.227 END TEST cpu_locks 00:08:22.227 ************************************ 00:08:22.227 09:14:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:22.227 09:14:08 event -- common/autotest_common.sh@1142 -- # return 0 00:08:22.227 00:08:22.227 real 1m18.778s 00:08:22.227 user 2m20.348s 00:08:22.227 sys 0m9.395s 00:08:22.227 09:14:08 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.227 ************************************ 00:08:22.227 END TEST event 00:08:22.227 ************************************ 00:08:22.227 09:14:08 event -- common/autotest_common.sh@10 -- # set +x 00:08:22.485 09:14:08 -- common/autotest_common.sh@1142 -- # return 0 00:08:22.485 09:14:08 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:22.485 09:14:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:22.485 09:14:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.485 09:14:08 -- common/autotest_common.sh@10 -- # set +x 00:08:22.485 ************************************ 00:08:22.485 START TEST thread 
00:08:22.485 ************************************ 00:08:22.485 09:14:08 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:22.485 * Looking for test storage... 00:08:22.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:22.485 09:14:08 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:22.485 09:14:08 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:22.485 09:14:08 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.485 09:14:08 thread -- common/autotest_common.sh@10 -- # set +x 00:08:22.485 ************************************ 00:08:22.485 START TEST thread_poller_perf 00:08:22.485 ************************************ 00:08:22.485 09:14:08 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:22.485 [2024-07-12 09:14:08.751867] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:22.485 [2024-07-12 09:14:08.752018] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64969 ] 00:08:22.743 [2024-07-12 09:14:08.910345] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.002 [2024-07-12 09:14:09.102936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.002 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:24.375 ====================================== 00:08:24.375 busy:2213716596 (cyc) 00:08:24.375 total_run_count: 285000 00:08:24.375 tsc_hz: 2200000000 (cyc) 00:08:24.375 ====================================== 00:08:24.375 poller_cost: 7767 (cyc), 3530 (nsec) 00:08:24.375 00:08:24.375 real 0m1.802s 00:08:24.375 user 0m1.594s 00:08:24.375 sys 0m0.096s 00:08:24.375 09:14:10 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:24.375 09:14:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:24.375 ************************************ 00:08:24.375 END TEST thread_poller_perf 00:08:24.375 ************************************ 00:08:24.375 09:14:10 thread -- common/autotest_common.sh@1142 -- # return 0 00:08:24.375 09:14:10 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:24.375 09:14:10 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:24.375 09:14:10 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.375 09:14:10 thread -- common/autotest_common.sh@10 -- # set +x 00:08:24.375 ************************************ 00:08:24.375 START TEST thread_poller_perf 00:08:24.375 ************************************ 00:08:24.375 09:14:10 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:24.375 [2024-07-12 09:14:10.613724] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
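The first poller_perf summary above is internally consistent: 2,213,716,596 busy cycles over 285,000 poller runs gives 2213716596 / 285000 ≈ 7767 cycles per poll, and at the reported 2,200,000,000 cyc/s TSC that is roughly 7767 / 2.2 ≈ 3530 ns, matching the printed poller_cost line. The same check, scripted (values truncated to whole numbers, as the tool's output appears to be):

    # Re-derive poller_cost from the counters printed by poller_perf.
    busy=2213716596 runs=285000 tsc_hz=2200000000
    awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" \
        'BEGIN { cyc = b / r; printf "poller_cost: %d (cyc), %d (nsec)\n", int(cyc), int(cyc / hz * 1e9) }'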
00:08:24.375 [2024-07-12 09:14:10.614160] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65011 ] 00:08:24.633 [2024-07-12 09:14:10.799837] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.891 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:24.892 [2024-07-12 09:14:11.030986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.268 ====================================== 00:08:26.268 busy:2205271236 (cyc) 00:08:26.268 total_run_count: 3500000 00:08:26.268 tsc_hz: 2200000000 (cyc) 00:08:26.268 ====================================== 00:08:26.268 poller_cost: 630 (cyc), 286 (nsec) 00:08:26.268 ************************************ 00:08:26.268 END TEST thread_poller_perf 00:08:26.268 ************************************ 00:08:26.268 00:08:26.268 real 0m1.866s 00:08:26.268 user 0m1.636s 00:08:26.268 sys 0m0.118s 00:08:26.268 09:14:12 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.268 09:14:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:26.268 09:14:12 thread -- common/autotest_common.sh@1142 -- # return 0 00:08:26.268 09:14:12 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:26.268 ************************************ 00:08:26.268 END TEST thread 00:08:26.268 ************************************ 00:08:26.268 00:08:26.268 real 0m3.856s 00:08:26.268 user 0m3.292s 00:08:26.268 sys 0m0.332s 00:08:26.268 09:14:12 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.268 09:14:12 thread -- common/autotest_common.sh@10 -- # set +x 00:08:26.268 09:14:12 -- common/autotest_common.sh@1142 -- # return 0 00:08:26.268 09:14:12 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:08:26.268 09:14:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:26.268 09:14:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.268 09:14:12 -- common/autotest_common.sh@10 -- # set +x 00:08:26.268 ************************************ 00:08:26.268 START TEST accel 00:08:26.268 ************************************ 00:08:26.268 09:14:12 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:08:26.268 * Looking for test storage... 00:08:26.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:26.268 09:14:12 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:08:26.268 09:14:12 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:08:26.268 09:14:12 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:26.268 09:14:12 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=65092 00:08:26.268 09:14:12 accel -- accel/accel.sh@63 -- # waitforlisten 65092 00:08:26.268 09:14:12 accel -- common/autotest_common.sh@829 -- # '[' -z 65092 ']' 00:08:26.268 09:14:12 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.268 09:14:12 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:26.268 09:14:12 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:08:26.268 09:14:12 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:26.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.268 09:14:12 accel -- accel/accel.sh@61 -- # build_accel_config 00:08:26.268 09:14:12 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:26.268 09:14:12 accel -- common/autotest_common.sh@10 -- # set +x 00:08:26.268 09:14:12 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:26.268 09:14:12 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:26.268 09:14:12 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:26.268 09:14:12 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:26.268 09:14:12 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:26.268 09:14:12 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:26.268 09:14:12 accel -- accel/accel.sh@41 -- # jq -r . 00:08:26.526 [2024-07-12 09:14:12.738989] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:26.526 [2024-07-12 09:14:12.739168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65092 ] 00:08:26.784 [2024-07-12 09:14:12.911320] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.042 [2024-07-12 09:14:13.143912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.608 09:14:13 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:27.608 09:14:13 accel -- common/autotest_common.sh@862 -- # return 0 00:08:27.608 09:14:13 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:08:27.608 09:14:13 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:08:27.608 09:14:13 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:08:27.608 09:14:13 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:08:27.608 09:14:13 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:27.608 09:14:13 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:08:27.608 09:14:13 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:08:27.608 09:14:13 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.608 09:14:13 accel -- common/autotest_common.sh@10 -- # set +x 00:08:27.866 09:14:13 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.866 09:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:27.866 09:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:08:27.866 09:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:27.866 09:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:27.866 09:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:27.866 09:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:08:27.866 09:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:27.866 09:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:27.866 09:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:27.866 09:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:08:27.866 09:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:27.866 09:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:27.866 09:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:27.866 09:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:08:27.866 09:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:27.866 09:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:27.866 09:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:27.866 09:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:08:27.866 09:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:27.866 09:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:27.866 09:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:27.866 09:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:08:27.866 09:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:27.866 09:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:27.866 09:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:27.866 09:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:08:27.866 09:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:27.866 09:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:27.866 09:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:27.866 09:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:08:27.866 09:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:27.866 09:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:27.866 09:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:27.866 09:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:08:27.866 09:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:27.866 09:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:27.866 09:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:27.866 09:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:08:27.866 09:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:27.866 09:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:27.866 09:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:27.866 09:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:08:27.866 09:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:27.866 
09:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:27.866 09:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:27.866 09:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:08:27.866 09:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:27.866 09:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:27.866 09:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:27.866 09:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:08:27.866 09:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:27.867 09:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:27.867 09:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:27.867 09:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:08:27.867 09:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:27.867 09:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:27.867 09:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:27.867 09:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:08:27.867 09:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:27.867 09:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:27.867 09:14:14 accel -- accel/accel.sh@75 -- # killprocess 65092 00:08:27.867 09:14:14 accel -- common/autotest_common.sh@948 -- # '[' -z 65092 ']' 00:08:27.867 09:14:14 accel -- common/autotest_common.sh@952 -- # kill -0 65092 00:08:27.867 09:14:14 accel -- common/autotest_common.sh@953 -- # uname 00:08:27.867 09:14:14 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:27.867 09:14:14 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65092 00:08:27.867 09:14:14 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:27.867 09:14:14 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:27.867 killing process with pid 65092 00:08:27.867 09:14:14 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65092' 00:08:27.867 09:14:14 accel -- common/autotest_common.sh@967 -- # kill 65092 00:08:27.867 09:14:14 accel -- common/autotest_common.sh@972 -- # wait 65092 00:08:29.769 09:14:16 accel -- accel/accel.sh@76 -- # trap - ERR 00:08:29.769 09:14:16 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:08:29.769 09:14:16 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:29.769 09:14:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.769 09:14:16 accel -- common/autotest_common.sh@10 -- # set +x 00:08:30.028 09:14:16 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:08:30.028 09:14:16 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:08:30.028 09:14:16 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:08:30.028 09:14:16 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:30.028 09:14:16 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:30.028 09:14:16 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:30.028 09:14:16 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:30.028 09:14:16 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:30.028 09:14:16 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:08:30.028 09:14:16 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:08:30.028 09:14:16 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:30.028 09:14:16 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:08:30.028 09:14:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:30.028 09:14:16 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:08:30.028 09:14:16 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:30.028 09:14:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.028 09:14:16 accel -- common/autotest_common.sh@10 -- # set +x 00:08:30.028 ************************************ 00:08:30.028 START TEST accel_missing_filename 00:08:30.028 ************************************ 00:08:30.028 09:14:16 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:08:30.028 09:14:16 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:08:30.028 09:14:16 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:08:30.028 09:14:16 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:30.028 09:14:16 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:30.028 09:14:16 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:30.028 09:14:16 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:30.028 09:14:16 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:08:30.028 09:14:16 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:08:30.028 09:14:16 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:08:30.028 09:14:16 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:30.028 09:14:16 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:30.028 09:14:16 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:30.028 09:14:16 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:30.028 09:14:16 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:30.028 09:14:16 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:08:30.028 09:14:16 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:08:30.028 [2024-07-12 09:14:16.304969] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:30.028 [2024-07-12 09:14:16.305141] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65162 ] 00:08:30.286 [2024-07-12 09:14:16.471134] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.544 [2024-07-12 09:14:16.660203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.544 [2024-07-12 09:14:16.847702] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:31.117 [2024-07-12 09:14:17.306767] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:08:31.375 A filename is required. 
00:08:31.375 09:14:17 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:08:31.375 09:14:17 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:31.375 09:14:17 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:08:31.375 ************************************ 00:08:31.375 END TEST accel_missing_filename 00:08:31.375 ************************************ 00:08:31.375 09:14:17 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:08:31.375 09:14:17 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:08:31.375 09:14:17 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:31.375 00:08:31.375 real 0m1.448s 00:08:31.375 user 0m1.237s 00:08:31.375 sys 0m0.150s 00:08:31.375 09:14:17 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.375 09:14:17 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:08:31.634 09:14:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:31.634 09:14:17 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:31.634 09:14:17 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:08:31.634 09:14:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.634 09:14:17 accel -- common/autotest_common.sh@10 -- # set +x 00:08:31.634 ************************************ 00:08:31.634 START TEST accel_compress_verify 00:08:31.634 ************************************ 00:08:31.634 09:14:17 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:31.634 09:14:17 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:08:31.634 09:14:17 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:31.634 09:14:17 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:31.634 09:14:17 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:31.634 09:14:17 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:31.634 09:14:17 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:31.634 09:14:17 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:31.634 09:14:17 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:31.634 09:14:17 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:31.634 09:14:17 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:31.634 09:14:17 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:31.634 09:14:17 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:31.634 09:14:17 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:31.634 09:14:17 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:31.634 09:14:17 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:08:31.634 09:14:17 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:08:31.634 [2024-07-12 09:14:17.806307] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:31.634 [2024-07-12 09:14:17.806464] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65199 ] 00:08:31.634 [2024-07-12 09:14:17.981426] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.893 [2024-07-12 09:14:18.212221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.152 [2024-07-12 09:14:18.396786] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:32.720 [2024-07-12 09:14:18.853883] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:08:32.979 00:08:32.979 Compression does not support the verify option, aborting. 00:08:32.979 ************************************ 00:08:32.979 END TEST accel_compress_verify 00:08:32.979 ************************************ 00:08:32.979 09:14:19 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:08:32.979 09:14:19 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:32.979 09:14:19 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:08:32.979 09:14:19 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:08:32.979 09:14:19 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:08:32.979 09:14:19 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:32.979 00:08:32.979 real 0m1.489s 00:08:32.979 user 0m1.293s 00:08:32.979 sys 0m0.136s 00:08:32.979 09:14:19 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:32.979 09:14:19 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:08:32.979 09:14:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:32.979 09:14:19 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:08:32.979 09:14:19 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:32.979 09:14:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.979 09:14:19 accel -- common/autotest_common.sh@10 -- # set +x 00:08:32.979 ************************************ 00:08:32.979 START TEST accel_wrong_workload 00:08:32.979 ************************************ 00:08:32.979 09:14:19 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:08:32.979 09:14:19 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:08:32.979 09:14:19 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:08:32.979 09:14:19 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:32.979 09:14:19 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:32.979 09:14:19 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:32.979 09:14:19 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:32.979 09:14:19 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:08:32.979 09:14:19 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:08:32.979 09:14:19 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:08:32.979 09:14:19 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:32.979 09:14:19 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:32.979 09:14:19 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:32.979 09:14:19 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:32.979 09:14:19 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:32.979 09:14:19 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:08:32.979 09:14:19 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:08:33.238 Unsupported workload type: foobar 00:08:33.238 [2024-07-12 09:14:19.348798] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:08:33.238 accel_perf options: 00:08:33.238 [-h help message] 00:08:33.238 [-q queue depth per core] 00:08:33.238 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:33.238 [-T number of threads per core 00:08:33.238 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:33.238 [-t time in seconds] 00:08:33.238 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:33.238 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:33.238 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:33.238 [-l for compress/decompress workloads, name of uncompressed input file 00:08:33.238 [-S for crc32c workload, use this seed value (default 0) 00:08:33.238 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:33.238 [-f for fill workload, use this BYTE value (default 255) 00:08:33.238 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:33.238 [-y verify result if this switch is on] 00:08:33.238 [-a tasks to allocate per core (default: same value as -q)] 00:08:33.238 Can be used to spread operations across a wider range of memory. 
00:08:33.238 09:14:19 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:08:33.238 09:14:19 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:33.238 09:14:19 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:33.238 09:14:19 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:33.238 00:08:33.238 real 0m0.073s 00:08:33.238 user 0m0.083s 00:08:33.238 sys 0m0.038s 00:08:33.238 09:14:19 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.238 ************************************ 00:08:33.238 END TEST accel_wrong_workload 00:08:33.238 09:14:19 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:08:33.238 ************************************ 00:08:33.238 09:14:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:33.238 09:14:19 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:08:33.238 09:14:19 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:08:33.238 09:14:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.238 09:14:19 accel -- common/autotest_common.sh@10 -- # set +x 00:08:33.238 ************************************ 00:08:33.238 START TEST accel_negative_buffers 00:08:33.238 ************************************ 00:08:33.238 09:14:19 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:08:33.238 09:14:19 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:08:33.238 09:14:19 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:08:33.238 09:14:19 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:33.238 09:14:19 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:33.238 09:14:19 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:33.238 09:14:19 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:33.238 09:14:19 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:08:33.238 09:14:19 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:08:33.238 09:14:19 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:08:33.238 09:14:19 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:33.238 09:14:19 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:33.238 09:14:19 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:33.238 09:14:19 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.238 09:14:19 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:33.238 09:14:19 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:08:33.238 09:14:19 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:08:33.238 -x option must be non-negative. 
00:08:33.238 [2024-07-12 09:14:19.472058] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:08:33.238 accel_perf options: 00:08:33.238 [-h help message] 00:08:33.238 [-q queue depth per core] 00:08:33.238 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:33.238 [-T number of threads per core 00:08:33.238 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:33.238 [-t time in seconds] 00:08:33.238 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:33.238 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:33.238 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:33.238 [-l for compress/decompress workloads, name of uncompressed input file 00:08:33.238 [-S for crc32c workload, use this seed value (default 0) 00:08:33.238 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:33.238 [-f for fill workload, use this BYTE value (default 255) 00:08:33.238 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:33.238 [-y verify result if this switch is on] 00:08:33.238 [-a tasks to allocate per core (default: same value as -q)] 00:08:33.238 Can be used to spread operations across a wider range of memory. 00:08:33.238 09:14:19 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:08:33.238 09:14:19 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:33.238 09:14:19 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:33.238 09:14:19 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:33.238 00:08:33.238 real 0m0.075s 00:08:33.238 user 0m0.077s 00:08:33.238 sys 0m0.041s 00:08:33.238 09:14:19 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.238 ************************************ 00:08:33.238 END TEST accel_negative_buffers 00:08:33.238 ************************************ 00:08:33.238 09:14:19 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:08:33.238 09:14:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:33.238 09:14:19 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:08:33.238 09:14:19 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:33.238 09:14:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.238 09:14:19 accel -- common/autotest_common.sh@10 -- # set +x 00:08:33.238 ************************************ 00:08:33.238 START TEST accel_crc32c 00:08:33.238 ************************************ 00:08:33.238 09:14:19 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:08:33.238 09:14:19 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:33.238 09:14:19 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:33.238 09:14:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:33.238 09:14:19 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:33.238 09:14:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:33.238 09:14:19 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:08:33.238 09:14:19 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:33.238 09:14:19 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:33.238 09:14:19 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:33.238 09:14:19 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:33.238 09:14:19 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.238 09:14:19 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:33.238 09:14:19 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:33.238 09:14:19 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:33.497 [2024-07-12 09:14:19.589848] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:33.497 [2024-07-12 09:14:19.590049] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65277 ] 00:08:33.497 [2024-07-12 09:14:19.765458] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.754 [2024-07-12 09:14:19.998636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:34.012 09:14:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:35.914 09:14:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:35.914 09:14:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:35.914 09:14:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:35.914 09:14:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:08:35.914 09:14:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:35.914 09:14:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:35.914 09:14:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:35.914 09:14:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:35.914 09:14:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:35.914 09:14:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:35.914 09:14:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:35.914 09:14:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:35.914 09:14:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:35.914 09:14:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:35.914 09:14:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:35.914 09:14:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:35.914 09:14:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:35.914 09:14:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:35.914 09:14:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:35.914 09:14:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:35.914 09:14:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:35.914 09:14:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:35.914 09:14:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:35.914 09:14:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:35.914 09:14:22 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:35.914 09:14:22 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:35.914 09:14:22 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:35.914 00:08:35.914 real 0m2.504s 00:08:35.914 user 0m0.017s 00:08:35.914 sys 0m0.004s 00:08:35.914 09:14:22 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:35.914 09:14:22 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:35.914 ************************************ 00:08:35.914 END TEST accel_crc32c 00:08:35.914 ************************************ 00:08:35.914 09:14:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:35.914 09:14:22 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:08:35.914 09:14:22 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:35.914 09:14:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.914 09:14:22 accel -- common/autotest_common.sh@10 -- # set +x 00:08:35.914 ************************************ 00:08:35.914 START TEST accel_crc32c_C2 00:08:35.914 ************************************ 00:08:35.914 09:14:22 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:08:35.914 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:35.914 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:35.914 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:35.914 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:35.914 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:35.914 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:35.914 09:14:22 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:35.914 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:35.914 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:35.914 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:35.914 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:35.914 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:35.914 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:35.914 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:35.914 [2024-07-12 09:14:22.137104] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:35.914 [2024-07-12 09:14:22.138087] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65322 ] 00:08:36.173 [2024-07-12 09:14:22.309039] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.173 [2024-07-12 09:14:22.496163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.431 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:36.431 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:36.431 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:36.431 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:36.431 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:36.431 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:36.431 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:36.431 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:36.431 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:36.431 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:36.431 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:36.431 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:36.431 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:36.431 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:36.431 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:36.431 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:36.431 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:36.431 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:36.431 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:36.431 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:36.431 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:08:36.431 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:36.431 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:36.431 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:36.432 09:14:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:38.332 09:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:38.332 09:14:24 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:38.332 09:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:38.332 09:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:38.332 09:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:38.332 09:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:38.332 09:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:38.332 09:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:38.332 09:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:38.332 09:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:38.332 09:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:38.332 09:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:38.332 09:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:38.332 09:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:38.332 09:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:38.332 09:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:38.332 09:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:38.332 09:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:38.332 09:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:38.332 09:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:38.332 09:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:38.332 09:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:38.332 09:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:38.332 09:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:38.332 09:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:38.333 09:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:38.333 09:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:38.333 00:08:38.333 real 0m2.443s 00:08:38.333 user 0m0.014s 00:08:38.333 sys 0m0.003s 00:08:38.333 09:14:24 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:38.333 09:14:24 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:38.333 ************************************ 00:08:38.333 END TEST accel_crc32c_C2 00:08:38.333 ************************************ 00:08:38.333 09:14:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:38.333 09:14:24 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:38.333 09:14:24 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:38.333 09:14:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.333 09:14:24 accel -- common/autotest_common.sh@10 -- # set +x 00:08:38.333 ************************************ 00:08:38.333 START TEST accel_copy 00:08:38.333 ************************************ 00:08:38.333 09:14:24 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:08:38.333 09:14:24 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:38.333 09:14:24 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:08:38.333 09:14:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:38.333 09:14:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:38.333 09:14:24 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:38.333 09:14:24 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:38.333 09:14:24 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:38.333 09:14:24 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:38.333 09:14:24 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:38.333 09:14:24 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:38.333 09:14:24 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:38.333 09:14:24 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:38.333 09:14:24 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:38.333 09:14:24 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:08:38.333 [2024-07-12 09:14:24.633385] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:38.333 [2024-07-12 09:14:24.633549] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65364 ] 00:08:38.591 [2024-07-12 09:14:24.806901] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.850 [2024-07-12 09:14:24.988286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:38.850 
09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:38.850 09:14:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.750 09:14:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:40.750 09:14:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.750 09:14:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.750 09:14:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.750 09:14:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:40.750 09:14:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.750 09:14:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.750 09:14:27 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:08:40.750 09:14:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:40.750 09:14:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.750 09:14:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.750 09:14:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.750 09:14:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:40.750 09:14:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.750 09:14:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.750 09:14:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.750 09:14:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:40.750 09:14:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.750 09:14:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.750 09:14:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.750 09:14:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:40.750 09:14:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.750 09:14:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.750 09:14:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.750 09:14:27 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:40.750 09:14:27 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:08:40.750 ************************************ 00:08:40.750 END TEST accel_copy 00:08:40.750 ************************************ 00:08:40.750 09:14:27 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:40.750 00:08:40.750 real 0m2.473s 00:08:40.750 user 0m2.222s 00:08:40.750 sys 0m0.154s 00:08:40.750 09:14:27 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:40.750 09:14:27 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:08:40.750 09:14:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:40.750 09:14:27 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:40.750 09:14:27 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:40.750 09:14:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.750 09:14:27 accel -- common/autotest_common.sh@10 -- # set +x 00:08:41.008 ************************************ 00:08:41.008 START TEST accel_fill 00:08:41.008 ************************************ 00:08:41.008 09:14:27 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:41.008 09:14:27 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:08:41.008 09:14:27 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:08:41.008 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:41.008 09:14:27 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:41.008 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:41.008 09:14:27 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:41.008 09:14:27 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:08:41.008 09:14:27 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:41.008 09:14:27 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:41.008 09:14:27 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:41.008 09:14:27 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:41.008 09:14:27 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:41.008 09:14:27 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:08:41.008 09:14:27 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:08:41.008 [2024-07-12 09:14:27.157370] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:41.008 [2024-07-12 09:14:27.157546] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65412 ] 00:08:41.008 [2024-07-12 09:14:27.330541] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.266 [2024-07-12 09:14:27.565558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.524 09:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:41.524 09:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:41.524 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:41.524 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:41.524 09:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:41.524 09:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:41.524 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:41.524 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:41.524 09:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:08:41.524 09:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:41.524 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:41.524 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:41.524 09:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:41.524 09:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:41.525 09:14:27 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:41.525 09:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:43.428 09:14:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:43.428 09:14:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:43.428 09:14:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:43.428 09:14:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:43.428 09:14:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:43.428 09:14:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:43.428 09:14:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:43.428 09:14:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:43.428 09:14:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:43.428 09:14:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:43.428 09:14:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
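For reference, the accel_fill pass above reduces to a single accel_perf invocation; a hand-run equivalent is sketched below. The binary path and flags are copied from the logged command line, while the flag interpretations are inferred from the dumped values (0x80, 64, '1 seconds', Yes) and should be read as assumptions rather than documented semantics.
  # Sketch: software fill workload for 1 second with result verification,
  # run without the -c JSON config that the harness normally supplies.
  #   -w fill   workload type (dumped as "fill")
  #   -f 128    fill byte, which shows up in the dump as 0x80
  #   -q 64     presumably the queue depth (one of the dumped 64s)
  #   -a 64     the other dumped 64; its exact meaning is not shown in the log
  #   -t 1      run time, dumped as '1 seconds'
  #   -y        presumably verification, dumped as Yes
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y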
00:08:43.428 09:14:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:43.428 09:14:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:43.428 09:14:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:43.428 09:14:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:43.428 09:14:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:43.428 09:14:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:43.428 09:14:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:43.428 09:14:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:43.428 09:14:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:43.428 09:14:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:43.428 09:14:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:43.428 09:14:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:43.428 09:14:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:43.428 09:14:29 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:43.428 09:14:29 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:08:43.428 09:14:29 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:43.428 00:08:43.428 real 0m2.504s 00:08:43.428 user 0m2.260s 00:08:43.428 sys 0m0.148s 00:08:43.428 09:14:29 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:43.428 09:14:29 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:08:43.428 ************************************ 00:08:43.428 END TEST accel_fill 00:08:43.428 ************************************ 00:08:43.428 09:14:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:43.428 09:14:29 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:43.428 09:14:29 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:43.428 09:14:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.428 09:14:29 accel -- common/autotest_common.sh@10 -- # set +x 00:08:43.428 ************************************ 00:08:43.428 START TEST accel_copy_crc32c 00:08:43.428 ************************************ 00:08:43.428 09:14:29 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:08:43.428 09:14:29 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:43.428 09:14:29 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:43.428 09:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:43.428 09:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:43.429 09:14:29 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:43.429 09:14:29 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:43.429 09:14:29 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:43.429 09:14:29 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:43.429 09:14:29 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:43.429 09:14:29 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:43.429 09:14:29 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:43.429 09:14:29 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:43.429 09:14:29 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:08:43.429 09:14:29 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:43.429 [2024-07-12 09:14:29.708014] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:43.429 [2024-07-12 09:14:29.708144] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65458 ] 00:08:43.687 [2024-07-12 09:14:29.876624] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.002 [2024-07-12 09:14:30.101765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.002 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:44.002 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.002 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.002 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.002 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:44.002 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.002 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.002 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.002 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:44.002 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.002 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.002 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:44.003 09:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:45.909 09:14:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:45.909 09:14:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
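The repeated '# IFS=:', '# read -r var val' and '# case "$var" in' entries above are the harness loop that splits each line of accel_perf's configuration dump on ':' and records the opcode and module before asserting that the software engine ran. A rough bash reconstruction of that loop, with the key patterns and the function name assumed from the trace:
  check_accel_run() {                     # assumed name for the checking loop
      local accel_opc= accel_module= var val
      while IFS=: read -r var val; do     # split each "key: value" line on ':'
          case "$var" in
              *opcode*) accel_opc=${val//[[:space:]]/} ;;
              *module*) accel_module=${val//[[:space:]]/} ;;
          esac
      done
      [[ -n $accel_module ]]              # a module was reported
      [[ -n $accel_opc ]]                 # an opcode was reported
      [[ $accel_module == software ]]     # the software path handled the op
  }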
00:08:45.909 09:14:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:45.909 09:14:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:45.909 09:14:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:45.909 09:14:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:45.909 09:14:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:45.909 09:14:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:45.909 09:14:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:45.909 09:14:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:45.909 09:14:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:45.909 09:14:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:45.909 09:14:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:45.909 09:14:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:45.909 09:14:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:45.909 09:14:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:45.909 09:14:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:45.909 09:14:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:45.910 09:14:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:45.910 09:14:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:45.910 09:14:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:45.910 09:14:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:45.910 09:14:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:45.910 09:14:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:45.910 ************************************ 00:08:45.910 END TEST accel_copy_crc32c 00:08:45.910 ************************************ 00:08:45.910 09:14:32 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:45.910 09:14:32 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:45.910 09:14:32 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:45.910 00:08:45.910 real 0m2.527s 00:08:45.910 user 0m0.014s 00:08:45.910 sys 0m0.005s 00:08:45.910 09:14:32 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:45.910 09:14:32 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:45.910 09:14:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:45.910 09:14:32 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:45.910 09:14:32 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:45.910 09:14:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.910 09:14:32 accel -- common/autotest_common.sh@10 -- # set +x 00:08:45.910 ************************************ 00:08:45.910 START TEST accel_copy_crc32c_C2 00:08:45.910 ************************************ 00:08:45.910 09:14:32 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:45.910 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:45.910 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:45.910 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:45.910 09:14:32 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:08:45.910 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:45.910 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:45.910 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:45.910 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:45.910 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:45.910 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:45.910 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:45.910 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:45.910 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:45.910 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:46.168 [2024-07-12 09:14:32.287418] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:46.168 [2024-07-12 09:14:32.287555] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65505 ] 00:08:46.168 [2024-07-12 09:14:32.450084] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.426 [2024-07-12 09:14:32.635224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:46.685 09:14:32 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:46.685 09:14:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:48.585 00:08:48.585 real 0m2.521s 00:08:48.585 user 0m0.020s 00:08:48.585 sys 0m0.002s 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:08:48.585 ************************************ 00:08:48.585 END TEST accel_copy_crc32c_C2 00:08:48.585 ************************************ 00:08:48.585 09:14:34 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:48.585 09:14:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:48.585 09:14:34 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:48.585 09:14:34 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:48.585 09:14:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.585 09:14:34 accel -- common/autotest_common.sh@10 -- # set +x 00:08:48.585 ************************************ 00:08:48.585 START TEST accel_dualcast 00:08:48.585 ************************************ 00:08:48.585 09:14:34 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:08:48.585 09:14:34 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:08:48.585 09:14:34 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:08:48.585 09:14:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:48.585 09:14:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:48.585 09:14:34 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:48.585 09:14:34 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:48.585 09:14:34 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:08:48.585 09:14:34 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:48.585 09:14:34 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:48.585 09:14:34 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:48.585 09:14:34 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:48.585 09:14:34 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:48.585 09:14:34 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:08:48.585 09:14:34 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:08:48.585 [2024-07-12 09:14:34.858841] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
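The START TEST / END TEST banners and the real/user/sys figures around each case come from the run_test helper in autotest_common.sh. A simplified stand-in for that pattern (not SPDK's exact implementation; banner width and details are assumptions):
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                           # produces the real/user/sys lines
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }
  # e.g. run_test accel_dualcast accel_test -t 1 -w dualcast -y, as invoked above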
00:08:48.585 [2024-07-12 09:14:34.858968] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65551 ] 00:08:48.844 [2024-07-12 09:14:35.019425] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.844 [2024-07-12 09:14:35.192565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:49.102 09:14:35 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:49.102 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:49.103 09:14:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:51.005 09:14:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:51.005 09:14:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:51.005 09:14:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:51.005 09:14:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:51.005 09:14:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:51.006 09:14:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:51.006 09:14:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:51.006 09:14:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:51.006 09:14:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:51.006 09:14:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:51.006 09:14:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:51.006 09:14:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:51.006 09:14:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:51.006 09:14:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:51.006 09:14:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:51.006 09:14:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:08:51.006 09:14:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:51.006 09:14:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:51.006 09:14:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:51.006 09:14:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:51.006 09:14:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:51.006 09:14:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:51.006 09:14:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:51.006 09:14:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:51.006 09:14:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:51.006 09:14:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:51.006 09:14:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:51.006 00:08:51.006 real 0m2.378s 00:08:51.006 user 0m2.149s 00:08:51.006 sys 0m0.133s 00:08:51.006 09:14:37 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:51.006 09:14:37 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:08:51.006 ************************************ 00:08:51.006 END TEST accel_dualcast 00:08:51.006 ************************************ 00:08:51.006 09:14:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:51.006 09:14:37 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:51.006 09:14:37 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:51.006 09:14:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.006 09:14:37 accel -- common/autotest_common.sh@10 -- # set +x 00:08:51.006 ************************************ 00:08:51.006 START TEST accel_compare 00:08:51.006 ************************************ 00:08:51.006 09:14:37 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:08:51.006 09:14:37 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:08:51.006 09:14:37 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:08:51.006 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:51.006 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:51.006 09:14:37 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:51.006 09:14:37 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:51.006 09:14:37 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:08:51.006 09:14:37 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:51.006 09:14:37 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:51.006 09:14:37 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:51.006 09:14:37 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:51.006 09:14:37 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:51.006 09:14:37 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:08:51.006 09:14:37 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:08:51.006 [2024-07-12 09:14:37.291900] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
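The -c /dev/fd/62 argument on every accel_perf command line is the JSON accel configuration assembled by build_accel_config; the accel_json_cfg=() array, 'local IFS=,' and 'jq -r .' entries above are that assembly. A hedged sketch of the mechanism, with the helper body and the JSON shape assumed (only the array name, the IFS join and the jq call are taken from the trace):
  accel_json_cfg=()                       # per-module JSON fragments; empty for the plain software path
  build_accel_config() {                  # assumed body
      local IFS=,
      jq -r . <<< "[${accel_json_cfg[*]}]"   # join the fragments, validate and pretty-print
  }
  # Process substitution hands the config to accel_perf as a /dev/fd path,
  # which is how a path like /dev/fd/62 ends up on the command line:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c <(build_accel_config) -t 1 -w compare -y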
00:08:51.006 [2024-07-12 09:14:37.292057] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65598 ] 00:08:51.265 [2024-07-12 09:14:37.451200] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.523 [2024-07-12 09:14:37.664790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.523 09:14:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:51.523 09:14:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:51.524 09:14:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:53.429 09:14:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:53.429 09:14:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:53.429 09:14:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:53.429 09:14:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:53.429 09:14:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:53.429 09:14:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:53.429 09:14:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:53.429 09:14:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:53.429 09:14:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:53.429 09:14:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:53.429 09:14:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:53.429 09:14:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:53.429 09:14:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:53.429 09:14:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:53.429 09:14:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:53.429 09:14:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:53.429 09:14:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:08:53.429 09:14:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:53.429 09:14:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:53.429 09:14:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:53.429 09:14:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:53.429 09:14:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:53.429 09:14:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:53.429 09:14:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:53.429 09:14:39 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:53.429 09:14:39 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:53.429 09:14:39 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:53.429 00:08:53.429 real 0m2.424s 00:08:53.429 user 0m2.197s 00:08:53.429 sys 0m0.133s 00:08:53.429 ************************************ 00:08:53.429 END TEST accel_compare 00:08:53.429 ************************************ 00:08:53.429 09:14:39 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:53.429 09:14:39 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:53.429 09:14:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:53.429 09:14:39 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:53.429 09:14:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:53.429 09:14:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.429 09:14:39 accel -- common/autotest_common.sh@10 -- # set +x 00:08:53.429 ************************************ 00:08:53.429 START TEST accel_xor 00:08:53.429 ************************************ 00:08:53.429 09:14:39 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:08:53.429 09:14:39 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:53.429 09:14:39 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:53.429 09:14:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:53.429 09:14:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:53.429 09:14:39 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:53.429 09:14:39 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:53.429 09:14:39 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:53.429 09:14:39 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:53.429 09:14:39 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:53.429 09:14:39 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:53.429 09:14:39 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:53.429 09:14:39 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:53.429 09:14:39 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:53.430 09:14:39 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:53.430 [2024-07-12 09:14:39.773910] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:53.430 [2024-07-12 09:14:39.774103] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65639 ] 00:08:53.688 [2024-07-12 09:14:39.959745] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.946 [2024-07-12 09:14:40.145821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:54.204 09:14:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:54.205 09:14:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:56.105 09:14:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:56.105 09:14:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:56.105 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:56.105 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:56.105 09:14:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:56.105 09:14:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:56.105 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:56.105 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:56.105 09:14:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:56.105 09:14:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:56.105 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:56.105 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:56.105 09:14:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:56.105 09:14:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:56.105 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:56.105 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:56.105 09:14:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:56.105 09:14:42 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:08:56.105 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:56.105 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:56.105 09:14:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:56.105 09:14:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:56.105 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:56.106 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:56.106 09:14:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:56.106 09:14:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:56.106 09:14:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:56.106 00:08:56.106 real 0m2.475s 00:08:56.106 user 0m0.017s 00:08:56.106 sys 0m0.002s 00:08:56.106 ************************************ 00:08:56.106 END TEST accel_xor 00:08:56.106 09:14:42 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:56.106 09:14:42 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:56.106 ************************************ 00:08:56.106 09:14:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:56.106 09:14:42 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:56.106 09:14:42 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:56.106 09:14:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.106 09:14:42 accel -- common/autotest_common.sh@10 -- # set +x 00:08:56.106 ************************************ 00:08:56.106 START TEST accel_xor 00:08:56.106 ************************************ 00:08:56.106 09:14:42 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:08:56.106 09:14:42 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:56.106 09:14:42 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:56.106 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:56.106 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:56.106 09:14:42 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:56.106 09:14:42 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:56.106 09:14:42 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:56.106 09:14:42 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:56.106 09:14:42 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:56.106 09:14:42 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:56.106 09:14:42 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:56.106 09:14:42 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:56.106 09:14:42 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:56.106 09:14:42 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:56.106 [2024-07-12 09:14:42.296508] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
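For reference, the xor workload launched above (accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3) XORs several source buffers into a destination; reading the echoed parameters, -x 3 appears to select three 4096-byte source vectors and -y to request result verification — that interpretation, and the listing below, are editorial assumptions rather than SPDK's implementation. A minimal, self-contained C sketch of the operation being timed:

    /* Sketch of a 3-source XOR with verification, mirroring "-w xor -y -x 3".
     * Buffer count and size are taken from the values echoed in this log
     * (3 sources, 4096 bytes); everything else is illustrative only. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define NSRC     3
    #define BUF_SIZE 4096

    int main(void)
    {
        uint8_t *src[NSRC];
        uint8_t dst[BUF_SIZE];
        uint8_t expected[BUF_SIZE];

        for (int i = 0; i < NSRC; i++) {
            src[i] = malloc(BUF_SIZE);
            for (int j = 0; j < BUF_SIZE; j++)
                src[i][j] = (uint8_t)(rand() & 0xff);
        }

        /* XOR all sources into dst, one byte at a time. */
        memcpy(dst, src[0], BUF_SIZE);
        for (int i = 1; i < NSRC; i++)
            for (int j = 0; j < BUF_SIZE; j++)
                dst[j] ^= src[i][j];

        /* Independent recomputation stands in for the -y (verify) pass. */
        for (int j = 0; j < BUF_SIZE; j++)
            expected[j] = src[0][j] ^ src[1][j] ^ src[2][j];

        if (memcmp(dst, expected, BUF_SIZE) != 0) {
            fprintf(stderr, "xor verification failed\n");
            return 1;
        }
        printf("xor of %d x %d-byte buffers verified\n", NSRC, BUF_SIZE);
        for (int i = 0; i < NSRC; i++)
            free(src[i]);
        return 0;
    }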
00:08:56.106 [2024-07-12 09:14:42.296676] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65686 ] 00:08:56.364 [2024-07-12 09:14:42.490481] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.364 [2024-07-12 09:14:42.677934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:56.623 09:14:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:58.523 09:14:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:58.523 09:14:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:58.523 09:14:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:58.523 09:14:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:58.523 09:14:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:58.523 09:14:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:58.523 09:14:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:58.523 09:14:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:58.523 09:14:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:58.523 09:14:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:58.523 09:14:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:58.523 09:14:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:58.523 09:14:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:58.523 09:14:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:58.523 09:14:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:58.523 09:14:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:58.523 09:14:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:58.523 09:14:44 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:08:58.523 09:14:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:58.523 09:14:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:58.523 09:14:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:58.523 09:14:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:58.523 09:14:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:58.523 09:14:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:58.523 09:14:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:58.523 ************************************ 00:08:58.523 END TEST accel_xor 00:08:58.523 ************************************ 00:08:58.523 09:14:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:58.523 09:14:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:58.523 00:08:58.523 real 0m2.479s 00:08:58.523 user 0m2.222s 00:08:58.523 sys 0m0.162s 00:08:58.523 09:14:44 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.523 09:14:44 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:58.523 09:14:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:58.523 09:14:44 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:58.523 09:14:44 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:58.523 09:14:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.523 09:14:44 accel -- common/autotest_common.sh@10 -- # set +x 00:08:58.523 ************************************ 00:08:58.523 START TEST accel_dif_verify 00:08:58.523 ************************************ 00:08:58.523 09:14:44 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:08:58.523 09:14:44 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:58.523 09:14:44 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:58.523 09:14:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:58.523 09:14:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:58.523 09:14:44 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:58.523 09:14:44 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:58.523 09:14:44 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:58.523 09:14:44 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:58.523 09:14:44 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:58.523 09:14:44 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:58.523 09:14:44 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:58.523 09:14:44 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:58.523 09:14:44 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:58.523 09:14:44 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:58.523 [2024-07-12 09:14:44.823273] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:58.523 [2024-07-12 09:14:44.823435] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65732 ] 00:08:58.781 [2024-07-12 09:14:44.997957] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.038 [2024-07-12 09:14:45.184566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:59.038 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:59.039 09:14:45 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:59.039 09:14:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:00.938 09:14:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:00.938 09:14:47 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:00.938 09:14:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:00.938 09:14:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:00.938 09:14:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:00.938 09:14:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:00.938 09:14:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:00.938 09:14:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:00.938 09:14:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:00.938 09:14:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:00.938 09:14:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:00.938 09:14:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:00.938 09:14:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:00.938 09:14:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:00.938 09:14:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:00.938 09:14:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:00.938 09:14:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:00.938 09:14:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:00.938 09:14:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:00.938 09:14:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:00.938 09:14:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:00.938 09:14:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:00.938 09:14:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:00.938 09:14:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:00.938 09:14:47 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:00.938 09:14:47 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:09:00.938 09:14:47 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:00.938 00:09:00.938 real 0m2.487s 00:09:00.938 user 0m2.238s 00:09:00.938 sys 0m0.153s 00:09:00.938 ************************************ 00:09:00.938 END TEST accel_dif_verify 00:09:00.938 ************************************ 00:09:00.938 09:14:47 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:00.938 09:14:47 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:09:01.196 09:14:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:01.197 09:14:47 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:09:01.197 09:14:47 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:09:01.197 09:14:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.197 09:14:47 accel -- common/autotest_common.sh@10 -- # set +x 00:09:01.197 ************************************ 00:09:01.197 START TEST accel_dif_generate 00:09:01.197 ************************************ 00:09:01.197 09:14:47 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:09:01.197 09:14:47 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:09:01.197 09:14:47 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:09:01.197 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:01.197 09:14:47 
accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:09:01.197 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:01.197 09:14:47 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:09:01.197 09:14:47 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:09:01.197 09:14:47 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:01.197 09:14:47 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:01.197 09:14:47 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:01.197 09:14:47 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:01.197 09:14:47 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:01.197 09:14:47 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:09:01.197 09:14:47 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:09:01.197 [2024-07-12 09:14:47.356706] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:01.197 [2024-07-12 09:14:47.356889] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65779 ] 00:09:01.455 [2024-07-12 09:14:47.556338] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.455 [2024-07-12 09:14:47.788252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:01.725 09:14:47 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:09:01.725 09:14:47 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:01.725 09:14:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:03.655 09:14:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:03.655 09:14:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:03.655 09:14:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:03.655 09:14:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:03.655 09:14:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:03.655 09:14:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:03.655 09:14:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:03.655 09:14:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:03.655 09:14:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:03.655 09:14:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:03.655 09:14:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:03.655 09:14:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:03.655 09:14:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:03.655 09:14:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:03.655 09:14:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:03.655 09:14:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:03.655 09:14:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:03.655 09:14:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:03.656 09:14:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:03.656 09:14:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:03.656 09:14:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:03.656 09:14:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:03.656 09:14:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:03.656 09:14:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:03.656 09:14:49 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:03.656 09:14:49 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:09:03.656 09:14:49 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:03.656 00:09:03.656 real 0m2.533s 
00:09:03.656 user 0m2.269s 00:09:03.656 sys 0m0.163s 00:09:03.656 ************************************ 00:09:03.656 END TEST accel_dif_generate 00:09:03.656 ************************************ 00:09:03.656 09:14:49 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:03.656 09:14:49 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:09:03.656 09:14:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:03.656 09:14:49 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:09:03.656 09:14:49 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:09:03.656 09:14:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.656 09:14:49 accel -- common/autotest_common.sh@10 -- # set +x 00:09:03.656 ************************************ 00:09:03.656 START TEST accel_dif_generate_copy 00:09:03.656 ************************************ 00:09:03.656 09:14:49 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:09:03.656 09:14:49 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:09:03.656 09:14:49 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:09:03.656 09:14:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:03.656 09:14:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:03.656 09:14:49 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:09:03.656 09:14:49 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:09:03.656 09:14:49 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:09:03.656 09:14:49 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:03.656 09:14:49 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:03.656 09:14:49 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:03.656 09:14:49 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:03.656 09:14:49 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:03.656 09:14:49 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:09:03.656 09:14:49 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:09:03.656 [2024-07-12 09:14:49.932265] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
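The dif_verify and dif_generate runs above both echo '4096 bytes', '512 bytes' and '8 bytes', which reads as 4096-byte buffers carved into 512-byte blocks, each carrying 8 bytes of protection information; that reading, and the T10-style layout below (CRC16 guard, application tag, reference tag), are editorial assumptions and not SPDK's code. A self-contained generate/verify sketch:

    /* Sketch of T10-style protection information (DIF): each 512-byte block
     * gets an 8-byte tuple of guard CRC, application tag and reference tag.
     * Sizes match the values echoed above; illustrative only. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BLOCK_SIZE 512
    #define NBLOCKS    (4096 / BLOCK_SIZE)

    struct dif {                 /* 8 bytes of protection information */
        uint16_t guard;          /* CRC over the data block */
        uint16_t app_tag;
        uint32_t ref_tag;        /* here: the block index */
    };

    /* Bitwise CRC-16 with the T10-DIF polynomial (0x8BB7), assumed here. */
    static uint16_t crc16_t10(const uint8_t *buf, size_t len)
    {
        uint16_t crc = 0;
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)buf[i] << 8;
            for (int b = 0; b < 8; b++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8bb7)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }

    static void dif_generate(const uint8_t *data, struct dif *pi, uint32_t n)
    {
        for (uint32_t i = 0; i < n; i++) {
            pi[i].guard   = crc16_t10(data + (size_t)i * BLOCK_SIZE, BLOCK_SIZE);
            pi[i].app_tag = 0;
            pi[i].ref_tag = i;
        }
    }

    static int dif_verify(const uint8_t *data, const struct dif *pi, uint32_t n)
    {
        for (uint32_t i = 0; i < n; i++) {
            if (pi[i].guard != crc16_t10(data + (size_t)i * BLOCK_SIZE, BLOCK_SIZE) ||
                pi[i].ref_tag != i)
                return -1;       /* corruption detected */
        }
        return 0;
    }

    int main(void)
    {
        uint8_t data[4096];
        struct dif pi[NBLOCKS];

        memset(data, 0xab, sizeof(data));
        dif_generate(data, pi, NBLOCKS);
        printf("dif_verify: %s\n",
               dif_verify(data, pi, NBLOCKS) == 0 ? "ok" : "mismatch");
        return 0;
    }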
00:09:03.656 [2024-07-12 09:14:49.932437] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65825 ] 00:09:03.912 [2024-07-12 09:14:50.112002] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.169 [2024-07-12 09:14:50.341415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:04.427 09:14:50 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:04.427 09:14:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
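The dif_generate_copy variant being configured above combines the two steps in one pass: data is copied to the destination while the per-block protection information is produced, typically interleaved after each block in an extended-LBA layout. That layout is an assumption here, and for brevity the guard below is a plain additive checksum standing in for the real CRC; the listing is illustrative, not SPDK code.

    /* Sketch of generate-and-copy in one pass into an extended destination
     * where each 512-byte block is followed by 8 bytes of metadata. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BLOCK_SIZE 512
    #define MD_SIZE    8
    #define NBLOCKS    (4096 / BLOCK_SIZE)

    int main(void)
    {
        uint8_t src[4096];
        uint8_t dst[(BLOCK_SIZE + MD_SIZE) * NBLOCKS];   /* extended layout */

        memset(src, 0x5a, sizeof(src));

        for (uint32_t i = 0; i < NBLOCKS; i++) {
            const uint8_t *in  = src + (size_t)i * BLOCK_SIZE;
            uint8_t       *out = dst + (size_t)i * (BLOCK_SIZE + MD_SIZE);

            memcpy(out, in, BLOCK_SIZE);                 /* the "copy" half */

            uint16_t guard = 0;                          /* stand-in for CRC16 */
            for (uint32_t b = 0; b < BLOCK_SIZE; b++)
                guard = (uint16_t)(guard + in[b]);

            memset(out + BLOCK_SIZE, 0, MD_SIZE);        /* the "generate" half */
            memcpy(out + BLOCK_SIZE, &guard, sizeof(guard));
            memcpy(out + BLOCK_SIZE + 4, &i, sizeof(i)); /* reference tag slot */
        }

        printf("copied %d blocks with interleaved metadata\n", NBLOCKS);
        return 0;
    }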
00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:06.327 00:09:06.327 real 0m2.506s 00:09:06.327 user 0m2.253s 00:09:06.327 sys 0m0.155s 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:06.327 09:14:52 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:09:06.327 ************************************ 00:09:06.327 END TEST accel_dif_generate_copy 00:09:06.327 ************************************ 00:09:06.327 09:14:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:06.327 09:14:52 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:09:06.327 09:14:52 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:06.327 09:14:52 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:09:06.327 09:14:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.327 09:14:52 accel -- common/autotest_common.sh@10 -- # set +x 00:09:06.327 ************************************ 00:09:06.327 START TEST accel_comp 00:09:06.327 ************************************ 00:09:06.327 09:14:52 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:06.327 09:14:52 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:09:06.327 09:14:52 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:09:06.327 09:14:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:06.327 09:14:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:06.327 09:14:52 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:06.327 09:14:52 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:06.327 09:14:52 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:09:06.327 09:14:52 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:06.327 09:14:52 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:06.327 09:14:52 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:06.327 09:14:52 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:06.327 09:14:52 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:06.327 09:14:52 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:09:06.327 09:14:52 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:09:06.327 [2024-07-12 09:14:52.486369] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:06.327 [2024-07-12 09:14:52.486510] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65872 ] 00:09:06.327 [2024-07-12 09:14:52.650922] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.585 [2024-07-12 09:14:52.837618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:06.850 09:14:53 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:06.850 09:14:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:08.758 09:14:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:08.758 09:14:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:08.758 09:14:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:08.758 09:14:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:08.758 09:14:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:08.758 09:14:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:08.758 09:14:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:08.758 09:14:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:08.758 09:14:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:08.758 09:14:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:08.758 09:14:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:08.758 09:14:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:08.758 09:14:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:08.758 09:14:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:08.758 09:14:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:08.758 09:14:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:08.758 09:14:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:08.758 09:14:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:08.758 09:14:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:08.758 09:14:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:08.758 09:14:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:08.758 09:14:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:08.758 09:14:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:08.758 09:14:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:08.758 09:14:54 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:08.758 09:14:54 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:09:08.758 09:14:54 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:08.758 00:09:08.758 real 0m2.439s 00:09:08.758 user 0m2.203s 00:09:08.758 sys 0m0.140s 00:09:08.758 09:14:54 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:08.758 09:14:54 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:09:08.758 ************************************ 00:09:08.758 END TEST accel_comp 00:09:08.758 ************************************ 00:09:08.758 09:14:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:08.758 09:14:54 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:08.758 09:14:54 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:09:08.758 09:14:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.758 09:14:54 accel -- common/autotest_common.sh@10 -- # set +x 00:09:08.758 ************************************ 00:09:08.758 START TEST accel_decomp 00:09:08.758 ************************************ 00:09:08.758 09:14:54 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:08.758 09:14:54 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:09:08.758 09:14:54 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:09:08.758 09:14:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:08.758 09:14:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:08.758 09:14:54 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:08.758 09:14:54 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:08.758 09:14:54 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:09:08.758 09:14:54 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:08.758 09:14:54 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:08.758 09:14:54 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:08.758 09:14:54 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:08.758 09:14:54 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:08.758 09:14:54 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:09:08.758 09:14:54 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:09:08.758 [2024-07-12 09:14:54.987354] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:08.759 [2024-07-12 09:14:54.987526] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65913 ] 00:09:09.017 [2024-07-12 09:14:55.161532] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.017 [2024-07-12 09:14:55.350170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:09.276 09:14:55 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:09:09.276 09:14:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:09.277 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:09.277 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:09.277 09:14:55 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:09:09.277 09:14:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:09.277 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:09.277 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:09.277 09:14:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
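The var/val pairs being read back above are the accel.sh harness replaying the accel_decomp options (workload decompress, the test/accel/bib input, the software module, a 1-second run with verification) before it drives accel_perf with a JSON config on /dev/fd/62, as shown in the accel_perf trace line earlier in this test. A rough standalone equivalent, dropping the harness-supplied -c /dev/fd/62 config; the flag glosses are inferred from this trace rather than from accel_perf's help output:

# 1-second software decompress of the bib test file with result verification
# (-t duration in seconds, -w workload, -l input file, -y verify: inferred)
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y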
00:09:09.277 09:14:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:09.277 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:09.277 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:09.277 09:14:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:09.277 09:14:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:09.277 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:09.277 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:09.277 09:14:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:09.277 09:14:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:09.277 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:09.277 09:14:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:11.178 09:14:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:11.178 09:14:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:11.178 09:14:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:11.178 09:14:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:11.179 09:14:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:11.179 09:14:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:11.179 09:14:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:11.179 09:14:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:11.179 09:14:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:11.179 09:14:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:11.179 09:14:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:11.179 09:14:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:11.179 09:14:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:11.179 09:14:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:11.179 09:14:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:11.179 09:14:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:11.179 09:14:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:11.179 09:14:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:11.179 09:14:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:11.179 09:14:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:11.179 09:14:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:11.179 09:14:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:11.179 09:14:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:11.179 09:14:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:11.179 09:14:57 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:11.179 09:14:57 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:11.179 09:14:57 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:11.179 00:09:11.179 real 0m2.443s 00:09:11.179 user 0m2.211s 00:09:11.179 sys 0m0.136s 00:09:11.179 09:14:57 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:11.179 09:14:57 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:09:11.179 ************************************ 00:09:11.179 END TEST accel_decomp 00:09:11.179 ************************************ 00:09:11.179 09:14:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:11.179 09:14:57 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:11.179 09:14:57 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:09:11.179 09:14:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.179 09:14:57 accel -- common/autotest_common.sh@10 -- # set +x 00:09:11.179 ************************************ 00:09:11.179 START TEST accel_decomp_full 00:09:11.179 ************************************ 00:09:11.179 09:14:57 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:11.179 09:14:57 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:09:11.179 09:14:57 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:09:11.179 09:14:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:11.179 09:14:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:11.179 09:14:57 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:11.179 09:14:57 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:11.179 09:14:57 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:09:11.179 09:14:57 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:11.179 09:14:57 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:11.179 09:14:57 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:11.179 09:14:57 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:11.179 09:14:57 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:11.179 09:14:57 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:09:11.179 09:14:57 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:09:11.179 [2024-07-12 09:14:57.488043] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
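accel_decomp_full repeats the previous case with -o 0 appended; further down in its trace the harness records '111250 bytes' where the plain accel_decomp run recorded '4096 bytes', so -o 0 appears to make the test operate on the whole bib payload rather than a 4 KiB chunk (an inference from this log, not documented behaviour). A sketch of the equivalent standalone command:

# Full-buffer decompress sketch; -o 0 is taken verbatim from the run_test
# line above, and its "use the full input size" reading is an inference.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0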
00:09:11.179 [2024-07-12 09:14:57.488279] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65965 ] 00:09:11.437 [2024-07-12 09:14:57.655179] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.696 [2024-07-12 09:14:57.852632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:11.696 09:14:58 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:11.696 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:11.697 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:11.697 09:14:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:09:11.697 09:14:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:11.697 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:11.697 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:11.697 09:14:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:09:11.697 09:14:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:11.697 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:11.955 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:11.955 09:14:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:09:11.955 09:14:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:11.955 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:11.955 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:11.955 09:14:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:11.955 09:14:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:11.955 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:11.955 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:11.955 09:14:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:11.955 09:14:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:11.955 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:11.955 09:14:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:13.858 09:14:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:13.858 09:14:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:13.858 09:14:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:13.858 09:14:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:13.858 09:14:59 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:13.858 09:14:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:13.858 09:14:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:13.858 09:14:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:13.858 09:14:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:13.858 09:14:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:13.858 09:14:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:13.858 09:14:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:13.858 09:14:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:13.858 09:14:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:13.858 09:14:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:13.858 09:14:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:13.858 09:14:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:13.858 09:14:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:13.858 09:14:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:13.858 09:14:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:13.858 09:14:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:13.858 09:14:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:13.858 09:14:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:13.859 09:14:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:13.859 09:14:59 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:13.859 09:14:59 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:13.859 09:14:59 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:13.859 00:09:13.859 real 0m2.477s 00:09:13.859 user 0m2.223s 00:09:13.859 sys 0m0.154s 00:09:13.859 09:14:59 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:13.859 09:14:59 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:09:13.859 ************************************ 00:09:13.859 END TEST accel_decomp_full 00:09:13.859 ************************************ 00:09:13.859 09:14:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:13.859 09:14:59 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:13.859 09:14:59 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:09:13.859 09:14:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.859 09:14:59 accel -- common/autotest_common.sh@10 -- # set +x 00:09:13.859 ************************************ 00:09:13.859 START TEST accel_decomp_mcore 00:09:13.859 ************************************ 00:09:13.859 09:14:59 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:13.859 09:14:59 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:09:13.859 09:14:59 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:09:13.859 09:14:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:13.859 09:14:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:13.859 09:14:59 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:13.859 09:14:59 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:13.859 09:14:59 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:09:13.859 09:14:59 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:13.859 09:14:59 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:13.859 09:14:59 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:13.859 09:14:59 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:13.859 09:14:59 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:13.859 09:14:59 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:09:13.859 09:14:59 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:09:13.859 [2024-07-12 09:14:59.990088] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:13.859 [2024-07-12 09:14:59.990297] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66006 ] 00:09:13.859 [2024-07-12 09:15:00.160845] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:14.117 [2024-07-12 09:15:00.352943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.117 [2024-07-12 09:15:00.353106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.117 [2024-07-12 09:15:00.353279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:14.117 [2024-07-12 09:15:00.353355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:14.375 09:15:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.276 09:15:02 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:16.276 00:09:16.276 real 0m2.580s 00:09:16.276 user 0m7.600s 00:09:16.276 sys 0m0.169s 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:16.276 09:15:02 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:09:16.276 ************************************ 00:09:16.276 END TEST accel_decomp_mcore 00:09:16.276 ************************************ 00:09:16.276 09:15:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:16.276 09:15:02 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:16.276 09:15:02 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:16.276 09:15:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:16.276 09:15:02 accel -- common/autotest_common.sh@10 -- # set +x 00:09:16.276 ************************************ 00:09:16.276 START TEST accel_decomp_full_mcore 00:09:16.276 ************************************ 00:09:16.276 09:15:02 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:16.276 09:15:02 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:09:16.276 09:15:02 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:09:16.276 09:15:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:16.276 09:15:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:16.276 09:15:02 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:16.276 09:15:02 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:16.276 09:15:02 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:09:16.276 09:15:02 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:16.276 09:15:02 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:16.276 09:15:02 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:16.276 09:15:02 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:16.276 09:15:02 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:16.276 09:15:02 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:09:16.276 09:15:02 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:09:16.276 [2024-07-12 09:15:02.621159] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:16.276 [2024-07-12 09:15:02.621347] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66056 ] 00:09:16.581 [2024-07-12 09:15:02.798004] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:16.869 [2024-07-12 09:15:03.027624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.869 [2024-07-12 09:15:03.027743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:16.869 [2024-07-12 09:15:03.027825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.869 [2024-07-12 09:15:03.028103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.131 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:17.131 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:17.131 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:17.131 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:17.131 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:17.131 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:17.131 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:17.131 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:17.131 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:17.131 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:17.131 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:17.131 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:17.131 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:09:17.131 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:17.131 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:17.131 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:17.131 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:17.131 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:17.131 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:17.131 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:17.131 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:17.131 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:17.131 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:17.132 09:15:03 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:17.132 09:15:03 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:17.132 09:15:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:19.033 09:15:05 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:19.033 00:09:19.033 real 0m2.555s 00:09:19.033 user 0m0.014s 00:09:19.033 sys 0m0.006s 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:19.033 09:15:05 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:09:19.033 ************************************ 00:09:19.033 END TEST accel_decomp_full_mcore 00:09:19.033 ************************************ 00:09:19.033 09:15:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:19.033 09:15:05 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:19.033 09:15:05 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:09:19.033 09:15:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:19.034 09:15:05 accel -- common/autotest_common.sh@10 -- # set +x 00:09:19.034 ************************************ 00:09:19.034 START TEST accel_decomp_mthread 00:09:19.034 ************************************ 00:09:19.034 09:15:05 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:19.034 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:09:19.034 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:09:19.034 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:19.034 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:19.034 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:19.034 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:19.034 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:09:19.034 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:19.034 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:19.034 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:19.034 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:19.034 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:19.034 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:09:19.034 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:09:19.034 [2024-07-12 09:15:05.213368] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
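The two mcore cases that finish above pass -m 0xf, and their EAL banners report 'Total cores available: 4' with reactors on cores 0-3, so -m reads as a core mask here (inferred from the reactor notices, not from accel_perf's usage text). The real/user/sys triplets look like bash time output around each sub-test; the plain mcore run accrues roughly 7.6 s of user time against about 2.6 s of wall time, which is consistent with four cores decompressing in parallel. A sketch of the multi-core variant:

# Multi-core decompress sketch; -m 0xf comes from the run_test line above
# and is read as "run reactors on cores 0-3" based on the reactor notices.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf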
00:09:19.034 [2024-07-12 09:15:05.213514] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66105 ] 00:09:19.034 [2024-07-12 09:15:05.376121] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.292 [2024-07-12 09:15:05.561016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
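accel_decomp_mthread keeps the default single-core mask (the trace above shows val=0x1) but adds -T 2, and a value of 2 is replayed a little further down, which points to two worker threads on that one core; like the other flag glosses here, this is inferred from the trace. A sketch of that variant:

# Single-core, two-thread decompress sketch; -T 2 is copied from the
# run_test line for this case and its threads-per-core reading is inferred.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2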
00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:19.551 09:15:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:21.449 ************************************ 00:09:21.449 END TEST accel_decomp_mthread 00:09:21.449 ************************************ 00:09:21.449 00:09:21.449 real 0m2.450s 00:09:21.449 user 0m2.216s 00:09:21.449 sys 0m0.139s 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.449 09:15:07 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:09:21.449 09:15:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:21.449 09:15:07 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:21.449 09:15:07 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:21.449 09:15:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.449 09:15:07 accel -- common/autotest_common.sh@10 -- # set +x 00:09:21.449 ************************************ 00:09:21.449 START 
TEST accel_decomp_full_mthread 00:09:21.449 ************************************ 00:09:21.449 09:15:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:21.449 09:15:07 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:09:21.449 09:15:07 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:09:21.449 09:15:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:21.449 09:15:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:21.449 09:15:07 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:21.449 09:15:07 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:21.449 09:15:07 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:09:21.450 09:15:07 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:21.450 09:15:07 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:21.450 09:15:07 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:21.450 09:15:07 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:21.450 09:15:07 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:21.450 09:15:07 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:09:21.450 09:15:07 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:09:21.450 [2024-07-12 09:15:07.710986] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:21.450 [2024-07-12 09:15:07.711364] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66152 ] 00:09:21.707 [2024-07-12 09:15:07.884766] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.965 [2024-07-12 09:15:08.114889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.223 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:22.223 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:22.223 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:22.223 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:22.223 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:22.223 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:22.223 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:22.223 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:22.223 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:22.223 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:22.223 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:22.223 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:22.223 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:09:22.223 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:22.223 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:22.223 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:22.223 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:22.223 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:22.223 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:22.223 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:22.224 09:15:08 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:22.224 09:15:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:24.178 00:09:24.178 real 0m2.596s 00:09:24.178 user 0m2.343s 00:09:24.178 sys 0m0.152s 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:24.178 09:15:10 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:09:24.178 ************************************ 00:09:24.178 END TEST accel_decomp_full_mthread 00:09:24.178 ************************************ 
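Note on the two decompress runs above: both accel_decomp_mthread and accel_decomp_full_mthread drive the accel_perf example binary against the pre-compressed bib fixture. The traces differ only in the recorded transfer size ('111250 bytes' for the full variant versus '4096 bytes') and in the flags passed through accel_test, with -o 0 appearing only in the full variant and -T 2 selecting the two-thread case in both. A minimal sketch of the underlying command, assuming the vagrant repo layout used in this job and omitting the -c /dev/fd/62 config plumbing because this run used an empty accel config:

# sketch only, not the exact accel.sh invocation; flags taken verbatim from the log above
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK/test/accel/bib" -y -o 0 -T 2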
00:09:24.178 09:15:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:24.178 09:15:10 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:09:24.178 09:15:10 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:24.178 09:15:10 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:24.178 09:15:10 accel -- accel/accel.sh@137 -- # build_accel_config 00:09:24.178 09:15:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.178 09:15:10 accel -- common/autotest_common.sh@10 -- # set +x 00:09:24.178 09:15:10 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:24.178 09:15:10 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:24.178 09:15:10 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:24.178 09:15:10 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:24.178 09:15:10 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:24.178 09:15:10 accel -- accel/accel.sh@40 -- # local IFS=, 00:09:24.178 09:15:10 accel -- accel/accel.sh@41 -- # jq -r . 00:09:24.178 ************************************ 00:09:24.178 START TEST accel_dif_functional_tests 00:09:24.178 ************************************ 00:09:24.178 09:15:10 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:24.178 [2024-07-12 09:15:10.394822] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:24.178 [2024-07-12 09:15:10.394984] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66199 ] 00:09:24.436 [2024-07-12 09:15:10.558396] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:24.695 [2024-07-12 09:15:10.793409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.695 [2024-07-12 09:15:10.793532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.695 [2024-07-12 09:15:10.793534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.954 00:09:24.954 00:09:24.954 CUnit - A unit testing framework for C - Version 2.1-3 00:09:24.954 http://cunit.sourceforge.net/ 00:09:24.954 00:09:24.954 00:09:24.954 Suite: accel_dif 00:09:24.954 Test: verify: DIF generated, GUARD check ...passed 00:09:24.954 Test: verify: DIF generated, APPTAG check ...passed 00:09:24.954 Test: verify: DIF generated, REFTAG check ...passed 00:09:24.954 Test: verify: DIF not generated, GUARD check ...[2024-07-12 09:15:11.135154] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:24.954 passed 00:09:24.954 Test: verify: DIF not generated, APPTAG check ...passed 00:09:24.954 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 09:15:11.135405] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:24.954 passed 00:09:24.954 Test: verify: APPTAG correct, APPTAG check ...[2024-07-12 09:15:11.135529] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:24.954 passed 00:09:24.954 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:09:24.954 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-07-12 09:15:11.135803] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, 
Actual=14 00:09:24.954 passed 00:09:24.954 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:09:24.954 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:09:24.954 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 09:15:11.136279] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:09:24.954 passed 00:09:24.954 Test: verify copy: DIF generated, GUARD check ...passed 00:09:24.954 Test: verify copy: DIF generated, APPTAG check ...passed 00:09:24.954 Test: verify copy: DIF generated, REFTAG check ...passed 00:09:24.954 Test: verify copy: DIF not generated, GUARD check ...[2024-07-12 09:15:11.137055] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:24.954 passed 00:09:24.954 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-12 09:15:11.137291] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:24.954 passed 00:09:24.954 Test: verify copy: DIF not generated, REFTAG check ...passed 00:09:24.954 Test: generate copy: DIF generated, GUARD check ...[2024-07-12 09:15:11.137546] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:24.954 passed 00:09:24.954 Test: generate copy: DIF generated, APTTAG check ...passed 00:09:24.954 Test: generate copy: DIF generated, REFTAG check ...passed 00:09:24.954 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:09:24.954 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:09:24.954 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:09:24.954 Test: generate copy: iovecs-len validate ...passed 00:09:24.954 Test: generate copy: buffer alignment validate ...[2024-07-12 09:15:11.138581] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:09:24.954 passed 00:09:24.954 00:09:24.954 Run Summary: Type Total Ran Passed Failed Inactive 00:09:24.954 suites 1 1 n/a 0 0 00:09:24.954 tests 26 26 26 0 0 00:09:24.954 asserts 115 115 115 0 n/a 00:09:24.954 00:09:24.954 Elapsed time = 0.010 seconds 00:09:26.328 00:09:26.328 real 0m1.953s 00:09:26.328 user 0m3.759s 00:09:26.328 sys 0m0.199s 00:09:26.328 09:15:12 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:26.328 ************************************ 00:09:26.328 END TEST accel_dif_functional_tests 00:09:26.328 ************************************ 00:09:26.328 09:15:12 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:09:26.328 09:15:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:26.328 00:09:26.328 real 0m59.773s 00:09:26.328 user 1m5.706s 00:09:26.328 sys 0m4.806s 00:09:26.328 09:15:12 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:26.328 09:15:12 accel -- common/autotest_common.sh@10 -- # set +x 00:09:26.328 ************************************ 00:09:26.328 END TEST accel 00:09:26.328 ************************************ 00:09:26.328 09:15:12 -- common/autotest_common.sh@1142 -- # return 0 00:09:26.328 09:15:12 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:26.328 09:15:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:26.328 09:15:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:26.328 09:15:12 -- common/autotest_common.sh@10 -- # set +x 00:09:26.328 ************************************ 00:09:26.328 START TEST accel_rpc 00:09:26.328 ************************************ 00:09:26.328 09:15:12 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:26.328 * Looking for test storage... 00:09:26.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:26.328 09:15:12 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:26.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.328 09:15:12 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=66281 00:09:26.328 09:15:12 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:09:26.328 09:15:12 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 66281 00:09:26.328 09:15:12 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 66281 ']' 00:09:26.328 09:15:12 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.328 09:15:12 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:26.328 09:15:12 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.328 09:15:12 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:26.328 09:15:12 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.328 [2024-07-12 09:15:12.524648] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
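Two things are worth noting about the accel_dif_functional_tests run above. First, the *ERROR* lines from dif.c are expected output: the negative-path cases deliberately corrupt the Guard, App Tag, and Ref Tag fields and assert that verification fails, which is why the CUnit summary still reports 26/26 tests and 115/115 asserts passing. Second, every suite in this log is wrapped by the run_test helper from test/common/autotest_common.sh, which is what prints the START TEST / END TEST banners and the real/user/sys timing. A simplified sketch of that wrapper, for orientation only; the real helper also manages xtrace and argument checks, and the interleaving of timing output and banners varies from run to run:

# rough shape of run_test (simplified assumption, not the verbatim autotest_common.sh code)
run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"          # the timed command is the test script or binary itself
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}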
00:09:26.328 [2024-07-12 09:15:12.525031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66281 ] 00:09:26.586 [2024-07-12 09:15:12.688742] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.586 [2024-07-12 09:15:12.874340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.151 09:15:13 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:27.151 09:15:13 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:27.151 09:15:13 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:09:27.151 09:15:13 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:09:27.151 09:15:13 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:09:27.151 09:15:13 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:09:27.151 09:15:13 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:09:27.151 09:15:13 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:27.151 09:15:13 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.151 09:15:13 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.151 ************************************ 00:09:27.151 START TEST accel_assign_opcode 00:09:27.151 ************************************ 00:09:27.151 09:15:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:09:27.151 09:15:13 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:09:27.151 09:15:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.151 09:15:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:27.151 [2024-07-12 09:15:13.463458] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:09:27.151 09:15:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.151 09:15:13 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:09:27.151 09:15:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.151 09:15:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:27.151 [2024-07-12 09:15:13.471415] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:09:27.151 09:15:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.151 09:15:13 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:09:27.151 09:15:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.151 09:15:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:28.084 09:15:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.084 09:15:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:09:28.084 09:15:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:09:28.084 09:15:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.084 09:15:14 accel_rpc.accel_assign_opcode 
-- common/autotest_common.sh@10 -- # set +x 00:09:28.084 09:15:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:09:28.084 09:15:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.084 software 00:09:28.084 00:09:28.084 real 0m0.746s 00:09:28.084 user 0m0.052s 00:09:28.084 sys 0m0.009s 00:09:28.084 ************************************ 00:09:28.084 END TEST accel_assign_opcode 00:09:28.084 ************************************ 00:09:28.084 09:15:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:28.084 09:15:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:28.084 09:15:14 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:28.084 09:15:14 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 66281 00:09:28.084 09:15:14 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 66281 ']' 00:09:28.084 09:15:14 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 66281 00:09:28.084 09:15:14 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:09:28.084 09:15:14 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:28.084 09:15:14 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66281 00:09:28.084 killing process with pid 66281 00:09:28.084 09:15:14 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:28.084 09:15:14 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:28.084 09:15:14 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66281' 00:09:28.084 09:15:14 accel_rpc -- common/autotest_common.sh@967 -- # kill 66281 00:09:28.084 09:15:14 accel_rpc -- common/autotest_common.sh@972 -- # wait 66281 00:09:30.615 ************************************ 00:09:30.615 END TEST accel_rpc 00:09:30.615 ************************************ 00:09:30.615 00:09:30.615 real 0m4.047s 00:09:30.615 user 0m4.105s 00:09:30.615 sys 0m0.448s 00:09:30.615 09:15:16 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:30.615 09:15:16 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.615 09:15:16 -- common/autotest_common.sh@1142 -- # return 0 00:09:30.615 09:15:16 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:30.615 09:15:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:30.615 09:15:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.615 09:15:16 -- common/autotest_common.sh@10 -- # set +x 00:09:30.615 ************************************ 00:09:30.615 START TEST app_cmdline 00:09:30.615 ************************************ 00:09:30.615 09:15:16 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:30.615 * Looking for test storage... 00:09:30.615 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:30.615 09:15:16 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:30.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
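The accel_rpc suite that just finished exercises runtime opcode assignment over the RPC socket rather than through accel_perf: the target is started with --wait-for-rpc, the copy opcode is first pointed at a bogus module and then at the software module, initialization is completed with framework_start_init, and the resulting assignment is read back. The same steps with rpc.py called directly, assuming a spdk_tgt already running with --wait-for-rpc on the default /var/tmp/spdk.sock:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC accel_assign_opc -o copy -m incorrect     # pre-init assignment to a bogus module name is accepted
$RPC accel_assign_opc -o copy -m software      # re-assign the copy opcode to the software module
$RPC framework_start_init                      # complete subsystem init (target was started with --wait-for-rpc)
$RPC accel_get_opc_assignments | jq -r .copy   # the test greps this output for "software"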
00:09:30.615 09:15:16 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=66392 00:09:30.615 09:15:16 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:30.615 09:15:16 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 66392 00:09:30.615 09:15:16 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 66392 ']' 00:09:30.615 09:15:16 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.615 09:15:16 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:30.615 09:15:16 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.615 09:15:16 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:30.615 09:15:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:30.615 [2024-07-12 09:15:16.620903] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:30.615 [2024-07-12 09:15:16.621147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66392 ] 00:09:30.615 [2024-07-12 09:15:16.791956] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.873 [2024-07-12 09:15:17.020694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.809 09:15:17 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:31.809 09:15:17 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:09:31.809 09:15:17 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:31.809 { 00:09:31.809 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:09:31.809 "fields": { 00:09:31.809 "major": 24, 00:09:31.809 "minor": 9, 00:09:31.809 "patch": 0, 00:09:31.809 "suffix": "-pre", 00:09:31.809 "commit": "719d03c6a" 00:09:31.809 } 00:09:31.809 } 00:09:31.809 09:15:18 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:31.809 09:15:18 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:31.809 09:15:18 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:31.809 09:15:18 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:31.809 09:15:18 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:31.809 09:15:18 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.809 09:15:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:31.809 09:15:18 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:31.809 09:15:18 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:31.809 09:15:18 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.067 09:15:18 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:32.067 09:15:18 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:32.067 09:15:18 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:32.067 09:15:18 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:09:32.067 09:15:18 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:32.067 09:15:18 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:32.067 09:15:18 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:32.067 09:15:18 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:32.067 09:15:18 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:32.067 09:15:18 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:32.067 09:15:18 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:32.067 09:15:18 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:32.067 09:15:18 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:32.067 09:15:18 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:32.325 request: 00:09:32.325 { 00:09:32.325 "method": "env_dpdk_get_mem_stats", 00:09:32.325 "req_id": 1 00:09:32.325 } 00:09:32.325 Got JSON-RPC error response 00:09:32.325 response: 00:09:32.325 { 00:09:32.325 "code": -32601, 00:09:32.325 "message": "Method not found" 00:09:32.325 } 00:09:32.325 09:15:18 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:09:32.325 09:15:18 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:32.325 09:15:18 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:32.325 09:15:18 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:32.325 09:15:18 app_cmdline -- app/cmdline.sh@1 -- # killprocess 66392 00:09:32.325 09:15:18 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 66392 ']' 00:09:32.325 09:15:18 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 66392 00:09:32.325 09:15:18 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:09:32.325 09:15:18 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:32.325 09:15:18 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66392 00:09:32.325 09:15:18 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:32.325 09:15:18 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:32.325 killing process with pid 66392 00:09:32.325 09:15:18 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66392' 00:09:32.325 09:15:18 app_cmdline -- common/autotest_common.sh@967 -- # kill 66392 00:09:32.325 09:15:18 app_cmdline -- common/autotest_common.sh@972 -- # wait 66392 00:09:34.853 00:09:34.853 real 0m4.208s 00:09:34.853 user 0m4.833s 00:09:34.853 sys 0m0.493s 00:09:34.853 ************************************ 00:09:34.853 END TEST app_cmdline 00:09:34.853 ************************************ 00:09:34.853 09:15:20 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:34.853 09:15:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:34.853 09:15:20 -- common/autotest_common.sh@1142 -- # return 0 00:09:34.853 09:15:20 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:34.853 09:15:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:34.853 09:15:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:34.853 09:15:20 -- 
common/autotest_common.sh@10 -- # set +x 00:09:34.853 ************************************ 00:09:34.853 START TEST version 00:09:34.853 ************************************ 00:09:34.853 09:15:20 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:34.853 * Looking for test storage... 00:09:34.853 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:34.853 09:15:20 version -- app/version.sh@17 -- # get_header_version major 00:09:34.853 09:15:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:34.854 09:15:20 version -- app/version.sh@14 -- # cut -f2 00:09:34.854 09:15:20 version -- app/version.sh@14 -- # tr -d '"' 00:09:34.854 09:15:20 version -- app/version.sh@17 -- # major=24 00:09:34.854 09:15:20 version -- app/version.sh@18 -- # get_header_version minor 00:09:34.854 09:15:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:34.854 09:15:20 version -- app/version.sh@14 -- # cut -f2 00:09:34.854 09:15:20 version -- app/version.sh@14 -- # tr -d '"' 00:09:34.854 09:15:20 version -- app/version.sh@18 -- # minor=9 00:09:34.854 09:15:20 version -- app/version.sh@19 -- # get_header_version patch 00:09:34.854 09:15:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:34.854 09:15:20 version -- app/version.sh@14 -- # cut -f2 00:09:34.854 09:15:20 version -- app/version.sh@14 -- # tr -d '"' 00:09:34.854 09:15:20 version -- app/version.sh@19 -- # patch=0 00:09:34.854 09:15:20 version -- app/version.sh@20 -- # get_header_version suffix 00:09:34.854 09:15:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:34.854 09:15:20 version -- app/version.sh@14 -- # cut -f2 00:09:34.854 09:15:20 version -- app/version.sh@14 -- # tr -d '"' 00:09:34.854 09:15:20 version -- app/version.sh@20 -- # suffix=-pre 00:09:34.854 09:15:20 version -- app/version.sh@22 -- # version=24.9 00:09:34.854 09:15:20 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:34.854 09:15:20 version -- app/version.sh@28 -- # version=24.9rc0 00:09:34.854 09:15:20 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:34.854 09:15:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:34.854 09:15:20 version -- app/version.sh@30 -- # py_version=24.9rc0 00:09:34.854 09:15:20 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:09:34.854 00:09:34.854 real 0m0.138s 00:09:34.854 user 0m0.069s 00:09:34.854 sys 0m0.097s 00:09:34.854 09:15:20 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:34.854 09:15:20 version -- common/autotest_common.sh@10 -- # set +x 00:09:34.854 ************************************ 00:09:34.854 END TEST version 00:09:34.854 ************************************ 00:09:34.854 09:15:20 -- common/autotest_common.sh@1142 -- # return 0 00:09:34.854 09:15:20 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:09:34.854 09:15:20 -- spdk/autotest.sh@198 -- # uname -s 00:09:34.854 09:15:20 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 
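The app_cmdline suite above starts the target with --rpcs-allowed spdk_get_version,rpc_get_methods and then confirms both that the two allowed methods work and that any other method fails with JSON-RPC error -32601 ("Method not found"). The version suite that follows is simpler still: it greps SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h and cross-checks the result against python3 -c 'import spdk; print(spdk.__version__)'. A condensed reproduction of the allowlist check, with a plain sleep standing in for the waitforlisten helper the test actually uses:

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
tgt_pid=$!
sleep 1                                           # crude stand-in for waitforlisten
"$SPDK/scripts/rpc.py" spdk_get_version           # allowed: prints the version JSON seen above
"$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats     # not allowed: expect "Method not found" (-32601)
kill "$tgt_pid"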
00:09:34.854 09:15:20 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:09:34.854 09:15:20 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:09:34.854 09:15:20 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:09:34.854 09:15:20 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:34.854 09:15:20 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:34.854 09:15:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:34.854 09:15:20 -- common/autotest_common.sh@10 -- # set +x 00:09:34.854 ************************************ 00:09:34.854 START TEST blockdev_nvme 00:09:34.854 ************************************ 00:09:34.854 09:15:20 blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:34.854 * Looking for test storage... 00:09:34.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:34.854 09:15:20 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:34.854 09:15:20 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:09:34.854 09:15:20 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:34.854 09:15:20 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:34.854 09:15:20 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:34.854 09:15:20 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:34.854 09:15:20 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:09:34.854 09:15:20 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:34.854 09:15:20 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:09:34.854 09:15:20 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:09:34.854 09:15:20 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:09:34.854 09:15:20 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:09:34.854 09:15:20 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:09:34.854 09:15:20 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:09:34.854 09:15:20 blockdev_nvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:09:34.854 09:15:20 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:09:34.854 09:15:20 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:09:34.854 09:15:20 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:09:34.854 09:15:20 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:09:34.854 09:15:20 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:09:34.854 09:15:20 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:09:34.854 09:15:20 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:09:34.854 09:15:20 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:09:34.854 09:15:20 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:09:34.854 09:15:20 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=66570 00:09:34.854 09:15:20 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:34.854 09:15:20 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 66570 00:09:34.854 09:15:20 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 66570 ']' 00:09:34.854 09:15:20 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
'' '' 00:09:34.854 09:15:20 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.854 09:15:20 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:34.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.854 09:15:20 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.854 09:15:20 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:34.854 09:15:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:34.854 [2024-07-12 09:15:21.056907] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:34.854 [2024-07-12 09:15:21.057062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66570 ] 00:09:35.111 [2024-07-12 09:15:21.217928] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.111 [2024-07-12 09:15:21.402550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.044 09:15:22 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:36.044 09:15:22 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:09:36.044 09:15:22 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:09:36.044 09:15:22 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:09:36.044 09:15:22 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:09:36.044 09:15:22 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:36.044 09:15:22 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:36.044 09:15:22 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:36.044 09:15:22 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.044 09:15:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:36.319 09:15:22 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.319 09:15:22 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:09:36.319 09:15:22 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.319 09:15:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:36.319 09:15:22 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.319 09:15:22 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:09:36.319 09:15:22 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:09:36.319 09:15:22 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.319 09:15:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:36.319 09:15:22 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
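For blockdev_nvme, gen_nvme.sh enumerates the NVMe controllers visible in the test VM and emits a bdev subsystem config that attaches each one; that JSON is what load_subsystem_config receives above. Reformatted for readability, the config generated in this run is:

rpc_cmd load_subsystem_config -j '{
  "subsystem": "bdev",
  "config": [
    { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } },
    { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:11.0" } },
    { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme2", "traddr": "0000:00:12.0" } },
    { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme3", "traddr": "0000:00:13.0" } }
  ]
}'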
00:09:36.319 09:15:22 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:09:36.319 09:15:22 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.319 09:15:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:36.319 09:15:22 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.319 09:15:22 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:36.319 09:15:22 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.319 09:15:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:36.319 09:15:22 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.319 09:15:22 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:09:36.319 09:15:22 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:09:36.319 09:15:22 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:09:36.319 09:15:22 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.319 09:15:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:36.579 09:15:22 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.579 09:15:22 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:09:36.579 09:15:22 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:09:36.580 09:15:22 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "e81c983f-a216-4943-9320-bf5f25fdf27e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e81c983f-a216-4943-9320-bf5f25fdf27e",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "7cd6645e-13d0-4b28-a865-34ba58e78781"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "7cd6645e-13d0-4b28-a865-34ba58e78781",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' 
"flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "ffb8c475-3b22-49e4-87db-00067b26d578"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ffb8c475-3b22-49e4-87db-00067b26d578",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "731bb882-3c00-4f53-999d-56dc73c18325"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "731bb882-3c00-4f53-999d-56dc73c18325",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' 
},' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "325b6944-966f-4d98-b309-6194cc044b69"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "325b6944-966f-4d98-b309-6194cc044b69",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "f5abb434-e931-4523-96f9-da85596c6eb5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f5abb434-e931-4523-96f9-da85596c6eb5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:36.580 09:15:22 
blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:09:36.580 09:15:22 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:09:36.580 09:15:22 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:09:36.580 09:15:22 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 66570 00:09:36.580 09:15:22 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 66570 ']' 00:09:36.580 09:15:22 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 66570 00:09:36.580 09:15:22 blockdev_nvme -- common/autotest_common.sh@953 -- # uname 00:09:36.580 09:15:22 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:36.580 09:15:22 blockdev_nvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66570 00:09:36.580 09:15:22 blockdev_nvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:36.580 09:15:22 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:36.580 killing process with pid 66570 00:09:36.580 09:15:22 blockdev_nvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66570' 00:09:36.580 09:15:22 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 66570 00:09:36.580 09:15:22 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 66570 00:09:39.110 09:15:24 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:39.110 09:15:24 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:39.110 09:15:24 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:39.110 09:15:24 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:39.110 09:15:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:39.110 ************************************ 00:09:39.110 START TEST bdev_hello_world 00:09:39.110 ************************************ 00:09:39.110 09:15:24 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:39.110 [2024-07-12 09:15:24.929555] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
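(The bdev_hello_world step launched in the trace above simply runs the prebuilt hello_bdev example against the first NVMe bdev from the list; the app is only just starting here. A minimal standalone invocation, reusing the exact binary and JSON config paths shown in the xtrace — a sketch, assuming the example is built and the run has root/hugepage setup as on the CI VM — would be:

    # Run the stock hello_bdev example against Nvme0n1 using the test bdev config.
    # Paths are the ones printed in the trace above; -b selects the target bdev.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1

On success it writes and reads back the "Hello World!" string, as the NOTICE lines below show.)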
00:09:39.110 [2024-07-12 09:15:24.929717] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66660 ] 00:09:39.110 [2024-07-12 09:15:25.096660] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.110 [2024-07-12 09:15:25.286407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.675 [2024-07-12 09:15:25.896753] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:39.675 [2024-07-12 09:15:25.896828] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:39.675 [2024-07-12 09:15:25.896864] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:39.675 [2024-07-12 09:15:25.899934] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:39.675 [2024-07-12 09:15:25.900458] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:39.675 [2024-07-12 09:15:25.900501] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:39.675 [2024-07-12 09:15:25.900743] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:39.675 00:09:39.675 [2024-07-12 09:15:25.900787] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:40.609 00:09:40.609 real 0m2.079s 00:09:40.609 user 0m1.754s 00:09:40.609 sys 0m0.217s 00:09:40.609 09:15:26 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:40.609 ************************************ 00:09:40.609 END TEST bdev_hello_world 00:09:40.609 09:15:26 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:40.609 ************************************ 00:09:40.868 09:15:26 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:40.868 09:15:26 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:09:40.868 09:15:26 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:40.868 09:15:26 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.868 09:15:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:40.868 ************************************ 00:09:40.868 START TEST bdev_bounds 00:09:40.868 ************************************ 00:09:40.868 09:15:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:09:40.868 09:15:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=66702 00:09:40.868 09:15:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:40.868 Process bdevio pid: 66702 00:09:40.868 09:15:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 66702' 00:09:40.868 09:15:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 66702 00:09:40.868 09:15:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 66702 ']' 00:09:40.868 09:15:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
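(The bdev_bounds test starting here is a two-step flow: bdevio is launched with -w so it loads the bdevs and then waits on /var/tmp/spdk.sock, and tests.py afterwards drives the boundary tests over that socket, as the next trace lines show. A rough standalone equivalent — a sketch using the same paths and flags as the trace, with a plain sleep standing in for the harness's waitforlisten polling — is:

    # Start bdevio with the test bdev config; -w -s 0 and the JSON path are
    # copied from the xtrace below. It waits for an RPC command before running.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    bdevio_pid=$!

    # Crude stand-in for the harness polling /var/tmp/spdk.sock.
    sleep 2

    # Kick off the actual I/O boundary tests over the default RPC socket.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests

    # Tear the bdevio process down afterwards.
    kill "$bdevio_pid"
    wait "$bdevio_pid")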
00:09:40.868 09:15:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:40.868 09:15:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:40.868 09:15:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.868 09:15:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:40.868 09:15:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:40.868 [2024-07-12 09:15:27.070729] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:40.868 [2024-07-12 09:15:27.070902] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66702 ] 00:09:41.126 [2024-07-12 09:15:27.245219] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:41.126 [2024-07-12 09:15:27.469060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.126 [2024-07-12 09:15:27.469139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.126 [2024-07-12 09:15:27.469151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.060 09:15:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:42.060 09:15:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:09:42.060 09:15:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:42.060 I/O targets: 00:09:42.060 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:42.060 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:42.060 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:42.060 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:42.060 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:42.060 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:42.060 00:09:42.060 00:09:42.060 CUnit - A unit testing framework for C - Version 2.1-3 00:09:42.060 http://cunit.sourceforge.net/ 00:09:42.060 00:09:42.060 00:09:42.060 Suite: bdevio tests on: Nvme3n1 00:09:42.060 Test: blockdev write read block ...passed 00:09:42.060 Test: blockdev write zeroes read block ...passed 00:09:42.060 Test: blockdev write zeroes read no split ...passed 00:09:42.060 Test: blockdev write zeroes read split ...passed 00:09:42.060 Test: blockdev write zeroes read split partial ...passed 00:09:42.060 Test: blockdev reset ...[2024-07-12 09:15:28.333209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:09:42.060 passed 00:09:42.060 Test: blockdev write read 8 blocks ...[2024-07-12 09:15:28.337080] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
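(For reference, the bdev inventory dumped in full earlier — Nvme1n1, the three Nvme2 namespaces, and the FDP-capable Nvme3n1 — has the shape returned by the bdev_get_bdevs RPC, which is what the harness folds into bdevs_name/bdev_list. Outside the harness the same listing can be pulled with the standard RPC client; a sketch, assuming an SPDK target is up on the default socket (the jq filter is illustrative, not from the trace):

    # Query all bdevs from a running SPDK app and print name, block size
    # and block count per bdev.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | "\(.name) \(.block_size) \(.num_blocks)"')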
00:09:42.060 passed 00:09:42.060 Test: blockdev write read size > 128k ...passed 00:09:42.060 Test: blockdev write read invalid size ...passed 00:09:42.060 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:42.060 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:42.060 Test: blockdev write read max offset ...passed 00:09:42.060 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:42.060 Test: blockdev writev readv 8 blocks ...passed 00:09:42.060 Test: blockdev writev readv 30 x 1block ...passed 00:09:42.060 Test: blockdev writev readv block ...passed 00:09:42.060 Test: blockdev writev readv size > 128k ...passed 00:09:42.060 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:42.060 Test: blockdev comparev and writev ...[2024-07-12 09:15:28.344303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26820a000 len:0x1000 00:09:42.060 [2024-07-12 09:15:28.344365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:42.060 passed 00:09:42.060 Test: blockdev nvme passthru rw ...passed 00:09:42.060 Test: blockdev nvme passthru vendor specific ...passed 00:09:42.060 Test: blockdev nvme admin passthru ...[2024-07-12 09:15:28.345213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:42.060 [2024-07-12 09:15:28.345258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:42.060 passed 00:09:42.060 Test: blockdev copy ...passed 00:09:42.060 Suite: bdevio tests on: Nvme2n3 00:09:42.060 Test: blockdev write read block ...passed 00:09:42.060 Test: blockdev write zeroes read block ...passed 00:09:42.060 Test: blockdev write zeroes read no split ...passed 00:09:42.060 Test: blockdev write zeroes read split ...passed 00:09:42.318 Test: blockdev write zeroes read split partial ...passed 00:09:42.318 Test: blockdev reset ...[2024-07-12 09:15:28.411537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:42.318 passed 00:09:42.318 Test: blockdev write read 8 blocks ...[2024-07-12 09:15:28.415761] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:42.318 passed 00:09:42.318 Test: blockdev write read size > 128k ...passed 00:09:42.318 Test: blockdev write read invalid size ...passed 00:09:42.318 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:42.318 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:42.318 Test: blockdev write read max offset ...passed 00:09:42.318 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:42.318 Test: blockdev writev readv 8 blocks ...passed 00:09:42.318 Test: blockdev writev readv 30 x 1block ...passed 00:09:42.318 Test: blockdev writev readv block ...passed 00:09:42.318 Test: blockdev writev readv size > 128k ...passed 00:09:42.318 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:42.318 Test: blockdev comparev and writev ...[2024-07-12 09:15:28.423170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x277a04000 len:0x1000 00:09:42.318 [2024-07-12 09:15:28.423242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:42.318 passed 00:09:42.318 Test: blockdev nvme passthru rw ...passed 00:09:42.318 Test: blockdev nvme passthru vendor specific ...passed 00:09:42.319 Test: blockdev nvme admin passthru ...[2024-07-12 09:15:28.424020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:42.319 [2024-07-12 09:15:28.424060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:42.319 passed 00:09:42.319 Test: blockdev copy ...passed 00:09:42.319 Suite: bdevio tests on: Nvme2n2 00:09:42.319 Test: blockdev write read block ...passed 00:09:42.319 Test: blockdev write zeroes read block ...passed 00:09:42.319 Test: blockdev write zeroes read no split ...passed 00:09:42.319 Test: blockdev write zeroes read split ...passed 00:09:42.319 Test: blockdev write zeroes read split partial ...passed 00:09:42.319 Test: blockdev reset ...[2024-07-12 09:15:28.490717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:42.319 passed 00:09:42.319 Test: blockdev write read 8 blocks ...[2024-07-12 09:15:28.495528] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:42.319 passed 00:09:42.319 Test: blockdev write read size > 128k ...passed 00:09:42.319 Test: blockdev write read invalid size ...passed 00:09:42.319 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:42.319 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:42.319 Test: blockdev write read max offset ...passed 00:09:42.319 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:42.319 Test: blockdev writev readv 8 blocks ...passed 00:09:42.319 Test: blockdev writev readv 30 x 1block ...passed 00:09:42.319 Test: blockdev writev readv block ...passed 00:09:42.319 Test: blockdev writev readv size > 128k ...passed 00:09:42.319 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:42.319 Test: blockdev comparev and writev ...[2024-07-12 09:15:28.502901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x273c3a000 len:0x1000 00:09:42.319 [2024-07-12 09:15:28.502968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:42.319 passed 00:09:42.319 Test: blockdev nvme passthru rw ...passed 00:09:42.319 Test: blockdev nvme passthru vendor specific ...passed 00:09:42.319 Test: blockdev nvme admin passthru ...[2024-07-12 09:15:28.503689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:42.319 [2024-07-12 09:15:28.503730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:42.319 passed 00:09:42.319 Test: blockdev copy ...passed 00:09:42.319 Suite: bdevio tests on: Nvme2n1 00:09:42.319 Test: blockdev write read block ...passed 00:09:42.319 Test: blockdev write zeroes read block ...passed 00:09:42.319 Test: blockdev write zeroes read no split ...passed 00:09:42.319 Test: blockdev write zeroes read split ...passed 00:09:42.319 Test: blockdev write zeroes read split partial ...passed 00:09:42.319 Test: blockdev reset ...[2024-07-12 09:15:28.582566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:42.319 passed 00:09:42.319 Test: blockdev write read 8 blocks ...[2024-07-12 09:15:28.586826] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:42.319 passed 00:09:42.319 Test: blockdev write read size > 128k ...passed 00:09:42.319 Test: blockdev write read invalid size ...passed 00:09:42.319 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:42.319 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:42.319 Test: blockdev write read max offset ...passed 00:09:42.319 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:42.319 Test: blockdev writev readv 8 blocks ...passed 00:09:42.319 Test: blockdev writev readv 30 x 1block ...passed 00:09:42.319 Test: blockdev writev readv block ...passed 00:09:42.319 Test: blockdev writev readv size > 128k ...passed 00:09:42.319 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:42.319 Test: blockdev comparev and writev ...[2024-07-12 09:15:28.594757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x273c34000 len:0x1000 00:09:42.319 [2024-07-12 09:15:28.594817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:42.319 passed 00:09:42.319 Test: blockdev nvme passthru rw ...passed 00:09:42.319 Test: blockdev nvme passthru vendor specific ...[2024-07-12 09:15:28.595635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:42.319 [2024-07-12 09:15:28.595676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:42.319 passed 00:09:42.319 Test: blockdev nvme admin passthru ...passed 00:09:42.319 Test: blockdev copy ...passed 00:09:42.319 Suite: bdevio tests on: Nvme1n1 00:09:42.319 Test: blockdev write read block ...passed 00:09:42.319 Test: blockdev write zeroes read block ...passed 00:09:42.319 Test: blockdev write zeroes read no split ...passed 00:09:42.319 Test: blockdev write zeroes read split ...passed 00:09:42.577 Test: blockdev write zeroes read split partial ...passed 00:09:42.577 Test: blockdev reset ...[2024-07-12 09:15:28.678210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:09:42.577 [2024-07-12 09:15:28.681961] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:42.577 passed 00:09:42.577 Test: blockdev write read 8 blocks ...passed 00:09:42.577 Test: blockdev write read size > 128k ...passed 00:09:42.577 Test: blockdev write read invalid size ...passed 00:09:42.577 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:42.577 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:42.577 Test: blockdev write read max offset ...passed 00:09:42.577 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:42.577 Test: blockdev writev readv 8 blocks ...passed 00:09:42.577 Test: blockdev writev readv 30 x 1block ...passed 00:09:42.577 Test: blockdev writev readv block ...passed 00:09:42.577 Test: blockdev writev readv size > 128k ...passed 00:09:42.577 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:42.577 Test: blockdev comparev and writev ...passed 00:09:42.577 Test: blockdev nvme passthru rw ...[2024-07-12 09:15:28.690261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x273c30000 len:0x1000 00:09:42.577 [2024-07-12 09:15:28.690329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:42.577 passed 00:09:42.577 Test: blockdev nvme passthru vendor specific ...[2024-07-12 09:15:28.691086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:42.577 passed 00:09:42.577 Test: blockdev nvme admin passthru ...[2024-07-12 09:15:28.691132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:42.577 passed 00:09:42.577 Test: blockdev copy ...passed 00:09:42.577 Suite: bdevio tests on: Nvme0n1 00:09:42.577 Test: blockdev write read block ...passed 00:09:42.577 Test: blockdev write zeroes read block ...passed 00:09:42.577 Test: blockdev write zeroes read no split ...passed 00:09:42.577 Test: blockdev write zeroes read split ...passed 00:09:42.577 Test: blockdev write zeroes read split partial ...passed 00:09:42.577 Test: blockdev reset ...[2024-07-12 09:15:28.774396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:42.577 passed 00:09:42.577 Test: blockdev write read 8 blocks ...[2024-07-12 09:15:28.778155] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:42.577 passed 00:09:42.577 Test: blockdev write read size > 128k ...passed 00:09:42.577 Test: blockdev write read invalid size ...passed 00:09:42.577 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:42.577 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:42.577 Test: blockdev write read max offset ...passed 00:09:42.577 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:42.577 Test: blockdev writev readv 8 blocks ...passed 00:09:42.577 Test: blockdev writev readv 30 x 1block ...passed 00:09:42.577 Test: blockdev writev readv block ...passed 00:09:42.577 Test: blockdev writev readv size > 128k ...passed 00:09:42.577 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:42.577 Test: blockdev comparev and writev ...passed 00:09:42.577 Test: blockdev nvme passthru rw ...[2024-07-12 09:15:28.785581] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:42.577 separate metadata which is not supported yet. 00:09:42.577 passed 00:09:42.577 Test: blockdev nvme passthru vendor specific ...passed 00:09:42.577 Test: blockdev nvme admin passthru ...[2024-07-12 09:15:28.786181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:42.577 [2024-07-12 09:15:28.786245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:42.577 passed 00:09:42.577 Test: blockdev copy ...passed 00:09:42.577 00:09:42.577 Run Summary: Type Total Ran Passed Failed Inactive 00:09:42.577 suites 6 6 n/a 0 0 00:09:42.577 tests 138 138 138 0 0 00:09:42.577 asserts 893 893 893 0 n/a 00:09:42.577 00:09:42.577 Elapsed time = 1.426 seconds 00:09:42.577 0 00:09:42.577 09:15:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 66702 00:09:42.577 09:15:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 66702 ']' 00:09:42.577 09:15:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 66702 00:09:42.577 09:15:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:09:42.577 09:15:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:42.577 09:15:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66702 00:09:42.577 09:15:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:42.577 09:15:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:42.577 killing process with pid 66702 00:09:42.577 09:15:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66702' 00:09:42.577 09:15:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 66702 00:09:42.577 09:15:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 66702 00:09:43.509 09:15:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:09:43.509 00:09:43.509 real 0m2.843s 00:09:43.509 user 0m7.027s 00:09:43.509 sys 0m0.335s 00:09:43.509 09:15:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:43.509 09:15:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:43.509 ************************************ 00:09:43.509 END TEST bdev_bounds 00:09:43.509 
************************************ 00:09:43.509 09:15:29 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:43.509 09:15:29 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:43.509 09:15:29 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:43.509 09:15:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:43.509 09:15:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:43.767 ************************************ 00:09:43.767 START TEST bdev_nbd 00:09:43.767 ************************************ 00:09:43.767 09:15:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:43.767 09:15:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:09:43.767 09:15:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:09:43.767 09:15:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:43.767 09:15:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:43.767 09:15:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:43.767 09:15:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:09:43.767 09:15:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:09:43.767 09:15:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:09:43.767 09:15:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:43.767 09:15:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:09:43.767 09:15:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:09:43.767 09:15:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:43.767 09:15:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:09:43.767 09:15:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:43.767 09:15:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:09:43.767 09:15:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=66767 00:09:43.767 09:15:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:43.767 09:15:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 66767 /var/tmp/spdk-nbd.sock 00:09:43.767 09:15:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 66767 ']' 00:09:43.767 09:15:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:43.767 09:15:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:43.767 09:15:29 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:09:43.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:43.767 09:15:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:43.767 09:15:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:43.767 09:15:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:43.767 [2024-07-12 09:15:29.995537] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:43.767 [2024-07-12 09:15:29.995697] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.026 [2024-07-12 09:15:30.168067] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.026 [2024-07-12 09:15:30.355786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:44.963 1+0 records in 00:09:44.963 1+0 records out 00:09:44.963 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000775687 s, 5.3 MB/s 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:44.963 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:45.531 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:45.531 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:45.531 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:45.531 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:45.531 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:45.531 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:45.531 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:45.531 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:45.531 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:45.531 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:45.531 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:45.531 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:45.531 1+0 records in 00:09:45.531 1+0 records out 00:09:45.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000575913 s, 7.1 MB/s 00:09:45.531 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:45.531 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:45.531 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:45.531 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:45.531 09:15:31 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@887 -- # return 0 00:09:45.531 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:45.531 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:45.531 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:45.791 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:45.791 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:45.791 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:45.791 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:09:45.791 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:45.791 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:45.791 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:45.791 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:09:45.791 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:45.791 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:45.791 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:45.791 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:45.791 1+0 records in 00:09:45.791 1+0 records out 00:09:45.791 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000737199 s, 5.6 MB/s 00:09:45.791 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:45.791 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:45.791 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:45.791 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:45.791 09:15:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:45.791 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:45.791 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:45.791 09:15:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:46.050 09:15:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:46.050 09:15:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:46.050 09:15:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:46.050 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:09:46.050 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:46.050 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:46.050 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:46.050 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:09:46.050 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:46.050 
09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:46.050 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:46.050 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:46.050 1+0 records in 00:09:46.050 1+0 records out 00:09:46.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654964 s, 6.3 MB/s 00:09:46.050 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:46.050 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:46.050 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:46.050 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:46.050 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:46.050 09:15:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:46.050 09:15:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:46.050 09:15:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:46.309 09:15:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:46.309 09:15:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:46.309 09:15:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:46.309 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:09:46.309 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:46.309 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:46.309 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:46.309 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:09:46.309 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:46.309 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:46.309 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:46.309 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:46.309 1+0 records in 00:09:46.309 1+0 records out 00:09:46.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000797611 s, 5.1 MB/s 00:09:46.309 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:46.309 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:46.309 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:46.309 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:46.309 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:46.309 09:15:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:46.309 09:15:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:46.309 09:15:32 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:46.567 09:15:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:46.567 09:15:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:46.567 09:15:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:46.567 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:09:46.567 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:46.567 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:46.567 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:46.567 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:09:46.567 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:46.567 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:46.567 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:46.567 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:46.567 1+0 records in 00:09:46.567 1+0 records out 00:09:46.567 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000832068 s, 4.9 MB/s 00:09:46.567 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:46.567 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:46.567 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:46.567 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:46.567 09:15:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:46.567 09:15:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:46.567 09:15:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:46.567 09:15:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:46.827 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:46.827 { 00:09:46.827 "nbd_device": "/dev/nbd0", 00:09:46.827 "bdev_name": "Nvme0n1" 00:09:46.827 }, 00:09:46.827 { 00:09:46.827 "nbd_device": "/dev/nbd1", 00:09:46.827 "bdev_name": "Nvme1n1" 00:09:46.827 }, 00:09:46.827 { 00:09:46.827 "nbd_device": "/dev/nbd2", 00:09:46.827 "bdev_name": "Nvme2n1" 00:09:46.827 }, 00:09:46.827 { 00:09:46.827 "nbd_device": "/dev/nbd3", 00:09:46.827 "bdev_name": "Nvme2n2" 00:09:46.827 }, 00:09:46.827 { 00:09:46.827 "nbd_device": "/dev/nbd4", 00:09:46.827 "bdev_name": "Nvme2n3" 00:09:46.827 }, 00:09:46.827 { 00:09:46.827 "nbd_device": "/dev/nbd5", 00:09:46.827 "bdev_name": "Nvme3n1" 00:09:46.827 } 00:09:46.827 ]' 00:09:46.827 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:46.827 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:46.827 { 00:09:46.827 "nbd_device": "/dev/nbd0", 00:09:46.827 "bdev_name": "Nvme0n1" 00:09:46.827 }, 00:09:46.827 { 00:09:46.827 "nbd_device": "/dev/nbd1", 00:09:46.827 
"bdev_name": "Nvme1n1" 00:09:46.827 }, 00:09:46.827 { 00:09:46.827 "nbd_device": "/dev/nbd2", 00:09:46.827 "bdev_name": "Nvme2n1" 00:09:46.827 }, 00:09:46.827 { 00:09:46.827 "nbd_device": "/dev/nbd3", 00:09:46.827 "bdev_name": "Nvme2n2" 00:09:46.827 }, 00:09:46.827 { 00:09:46.827 "nbd_device": "/dev/nbd4", 00:09:46.827 "bdev_name": "Nvme2n3" 00:09:46.827 }, 00:09:46.827 { 00:09:46.827 "nbd_device": "/dev/nbd5", 00:09:46.827 "bdev_name": "Nvme3n1" 00:09:46.827 } 00:09:46.827 ]' 00:09:46.827 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:46.827 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:09:46.827 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.827 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:09:46.827 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:46.827 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:46.827 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.827 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:47.086 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:47.086 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:47.086 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:47.086 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:47.086 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:47.086 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:47.086 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:47.086 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:47.086 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:47.086 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:47.344 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:47.344 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:47.344 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:47.344 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:47.344 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:47.344 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:47.344 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:47.344 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:47.344 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:47.344 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:47.910 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd2 00:09:47.910 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:47.910 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:47.910 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:47.910 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:47.910 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:47.910 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:47.910 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:47.910 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:47.910 09:15:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:47.910 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:47.910 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:47.910 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:47.910 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:47.910 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:47.910 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:47.910 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:47.910 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:47.910 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:47.910 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:48.168 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:48.168 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:48.168 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:48.168 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:48.168 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:48.168 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:48.168 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:48.168 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:48.168 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:48.168 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:48.425 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:48.425 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:48.425 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:48.425 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:48.425 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:48.426 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:48.426 09:15:34 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:48.426 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:48.426 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:48.426 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:48.426 09:15:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:48.993 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:48.993 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:48.993 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:48.993 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:48.993 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:48.993 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:48.993 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:48.993 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:48.993 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:48.993 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:48.993 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:48.993 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:48.993 09:15:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:48.993 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:48.993 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:48.993 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:48.993 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:48.993 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:48.993 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:48.993 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:48.993 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:48.993 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:48.993 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:48.993 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:48.993 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:48.993 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:48.993 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:48.993 09:15:35 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:48.993 /dev/nbd0 00:09:49.252 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:49.252 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:49.252 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:49.252 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:49.252 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:49.252 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:49.252 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:49.252 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:49.252 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:49.252 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:49.252 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:49.252 1+0 records in 00:09:49.252 1+0 records out 00:09:49.252 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604188 s, 6.8 MB/s 00:09:49.252 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.252 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:49.252 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.252 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:49.252 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:49.252 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:49.252 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:49.252 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:09:49.512 /dev/nbd1 00:09:49.512 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:49.512 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:49.512 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:49.512 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:49.512 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:49.512 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:49.512 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:49.512 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:49.512 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:49.512 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:49.512 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:49.512 
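(Each waitfornbd/dd pair in this part of the trace is the harness confirming that a freshly exported NBD device is actually readable. The same start/verify/stop cycle can be reproduced by hand with the RPC calls visible above — a sketch, assuming bdev_svc (or another SPDK app) is already serving /var/tmp/spdk-nbd.sock and the nbd kernel module is loaded:

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    # Export the bdev as a kernel NBD device.
    $rpc_py nbd_start_disk Nvme0n1 /dev/nbd0

    # List the current bdev <-> nbd mappings (same JSON and jq filter as the trace).
    $rpc_py nbd_get_disks | jq -r '.[] | .nbd_device'

    # Read one 4 KiB block with O_DIRECT to confirm the device responds,
    # mirroring the dd check in the trace.
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
        bs=4096 count=1 iflag=direct

    # Detach the NBD device again.
    $rpc_py nbd_stop_disk /dev/nbd0)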
1+0 records in 00:09:49.512 1+0 records out 00:09:49.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000794455 s, 5.2 MB/s 00:09:49.512 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.512 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:49.512 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.512 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:49.512 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:49.512 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:49.512 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:49.512 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:09:49.771 /dev/nbd10 00:09:49.771 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:49.771 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:49.771 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:09:49.771 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:49.771 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:49.771 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:49.771 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:09:49.771 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:49.771 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:49.771 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:49.771 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:49.771 1+0 records in 00:09:49.771 1+0 records out 00:09:49.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555287 s, 7.4 MB/s 00:09:49.771 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.771 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:49.771 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.771 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:49.771 09:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:49.771 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:49.771 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:49.771 09:15:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:09:50.030 /dev/nbd11 00:09:50.030 09:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:50.030 09:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:50.030 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # 
local nbd_name=nbd11 00:09:50.030 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:50.030 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:50.030 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:50.030 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:09:50.030 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:50.030 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:50.030 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:50.030 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:50.030 1+0 records in 00:09:50.030 1+0 records out 00:09:50.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000730665 s, 5.6 MB/s 00:09:50.030 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:50.030 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:50.030 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:50.030 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:50.030 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:50.030 09:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:50.030 09:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:50.030 09:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:09:50.327 /dev/nbd12 00:09:50.327 09:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:50.327 09:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:50.327 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:09:50.327 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:50.327 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:50.327 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:50.327 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:09:50.327 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:50.327 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:50.327 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:50.327 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:50.327 1+0 records in 00:09:50.327 1+0 records out 00:09:50.327 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000663614 s, 6.2 MB/s 00:09:50.327 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:50.327 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:50.327 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm 
-f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:50.327 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:50.327 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:50.327 09:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:50.327 09:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:50.327 09:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:09:50.608 /dev/nbd13 00:09:50.608 09:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:50.608 09:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:50.608 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:09:50.609 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:09:50.609 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:50.609 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:50.609 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:09:50.609 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:09:50.609 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:50.609 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:50.609 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:50.609 1+0 records in 00:09:50.609 1+0 records out 00:09:50.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588349 s, 7.0 MB/s 00:09:50.609 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:50.609 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:09:50.609 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:50.609 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:50.609 09:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:09:50.609 09:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:50.609 09:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:50.609 09:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:50.609 09:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.609 09:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:50.867 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:50.867 { 00:09:50.867 "nbd_device": "/dev/nbd0", 00:09:50.867 "bdev_name": "Nvme0n1" 00:09:50.867 }, 00:09:50.867 { 00:09:50.867 "nbd_device": "/dev/nbd1", 00:09:50.867 "bdev_name": "Nvme1n1" 00:09:50.867 }, 00:09:50.867 { 00:09:50.867 "nbd_device": "/dev/nbd10", 00:09:50.867 "bdev_name": "Nvme2n1" 00:09:50.867 }, 00:09:50.867 { 00:09:50.867 "nbd_device": "/dev/nbd11", 00:09:50.867 "bdev_name": "Nvme2n2" 00:09:50.867 }, 
00:09:50.867 { 00:09:50.867 "nbd_device": "/dev/nbd12", 00:09:50.867 "bdev_name": "Nvme2n3" 00:09:50.867 }, 00:09:50.867 { 00:09:50.867 "nbd_device": "/dev/nbd13", 00:09:50.867 "bdev_name": "Nvme3n1" 00:09:50.867 } 00:09:50.867 ]' 00:09:50.867 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:50.867 { 00:09:50.867 "nbd_device": "/dev/nbd0", 00:09:50.867 "bdev_name": "Nvme0n1" 00:09:50.867 }, 00:09:50.867 { 00:09:50.867 "nbd_device": "/dev/nbd1", 00:09:50.867 "bdev_name": "Nvme1n1" 00:09:50.867 }, 00:09:50.867 { 00:09:50.867 "nbd_device": "/dev/nbd10", 00:09:50.867 "bdev_name": "Nvme2n1" 00:09:50.867 }, 00:09:50.867 { 00:09:50.867 "nbd_device": "/dev/nbd11", 00:09:50.867 "bdev_name": "Nvme2n2" 00:09:50.867 }, 00:09:50.867 { 00:09:50.867 "nbd_device": "/dev/nbd12", 00:09:50.867 "bdev_name": "Nvme2n3" 00:09:50.867 }, 00:09:50.867 { 00:09:50.867 "nbd_device": "/dev/nbd13", 00:09:50.867 "bdev_name": "Nvme3n1" 00:09:50.867 } 00:09:50.867 ]' 00:09:50.867 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:51.126 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:51.126 /dev/nbd1 00:09:51.126 /dev/nbd10 00:09:51.126 /dev/nbd11 00:09:51.126 /dev/nbd12 00:09:51.126 /dev/nbd13' 00:09:51.126 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:51.126 /dev/nbd1 00:09:51.126 /dev/nbd10 00:09:51.126 /dev/nbd11 00:09:51.126 /dev/nbd12 00:09:51.126 /dev/nbd13' 00:09:51.126 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:51.126 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:09:51.126 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:09:51.126 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:09:51.126 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:09:51.126 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:09:51.126 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:51.126 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:51.126 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:51.126 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:51.126 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:51.126 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:51.126 256+0 records in 00:09:51.126 256+0 records out 00:09:51.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00664822 s, 158 MB/s 00:09:51.126 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:51.126 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:51.126 256+0 records in 00:09:51.126 256+0 records out 00:09:51.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.112161 s, 9.3 MB/s 00:09:51.126 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:51.126 09:15:37 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:51.384 256+0 records in 00:09:51.384 256+0 records out 00:09:51.384 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145411 s, 7.2 MB/s 00:09:51.384 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:51.384 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:51.384 256+0 records in 00:09:51.384 256+0 records out 00:09:51.384 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119477 s, 8.8 MB/s 00:09:51.384 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:51.384 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:51.642 256+0 records in 00:09:51.642 256+0 records out 00:09:51.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.120162 s, 8.7 MB/s 00:09:51.642 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:51.642 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:51.642 256+0 records in 00:09:51.642 256+0 records out 00:09:51.643 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152376 s, 6.9 MB/s 00:09:51.643 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:51.643 09:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:51.902 256+0 records in 00:09:51.902 256+0 records out 00:09:51.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144691 s, 7.2 MB/s 00:09:51.902 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:09:51.902 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:51.902 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:51.902 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:51.902 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:51.902 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:51.902 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:51.902 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:51.902 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:51.902 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:51.902 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:51.902 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:51.902 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:51.902 09:15:38 
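The data-integrity pass traced here is a plain write-then-compare: fill a scratch file with 1 MiB of random data, copy it onto every NBD device with direct I/O, then cmp the first 1 MiB of each device against that file. A condensed sketch, with the scratch path shortened from the repo location used in the run:

    # Write/verify pass over the six NBD devices, as traced above.
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    tmp_file=/tmp/nbdrandtest   # the run uses test/bdev/nbdrandtest in the repo

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256          # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # write it out
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"                              # byte-compare the first 1 MiB
    done
    rm "$tmp_file"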
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:51.902 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:51.902 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:51.902 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:51.902 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:51.902 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:51.902 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:51.902 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:51.902 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.902 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:51.902 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:51.902 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:51.902 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:51.902 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:52.160 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:52.160 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:52.160 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:52.160 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:52.160 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:52.160 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:52.160 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:52.160 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:52.160 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:52.160 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:52.419 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:52.419 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:52.419 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:52.419 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:52.419 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:52.419 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:52.419 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:52.419 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:52.419 09:15:38 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:52.419 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:52.678 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:52.678 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:52.678 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:52.678 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:52.678 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:52.678 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:52.678 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:52.678 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:52.678 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:52.678 09:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:53.245 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:53.245 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:53.245 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:53.245 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.245 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.245 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:53.245 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:53.245 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.245 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:53.245 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:53.245 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:53.245 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:53.245 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:53.245 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.245 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.245 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:53.245 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:53.245 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.245 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:53.245 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:53.812 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:53.812 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:53.812 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:53.812 
09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.812 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.812 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:53.812 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:53.812 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.812 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:53.812 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.812 09:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:54.071 09:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:54.071 09:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:54.071 09:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:54.071 09:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:54.071 09:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:54.071 09:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:54.071 09:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:54.071 09:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:54.071 09:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:54.071 09:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:54.071 09:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:54.071 09:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:54.071 09:15:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:54.071 09:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.071 09:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:54.071 09:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:54.071 09:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:54.071 09:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:54.330 malloc_lvol_verify 00:09:54.330 09:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:54.588 8e38692a-f80a-45ff-bc4a-73708fb9c60f 00:09:54.588 09:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:54.846 892847d5-2681-4a3d-a8fd-775e8b25f053 00:09:54.846 09:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:55.105 /dev/nbd0 00:09:55.105 09:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:55.105 mke2fs 1.46.5 
(30-Dec-2021) 00:09:55.105 Discarding device blocks: 0/4096 done 00:09:55.105 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:55.105 00:09:55.105 Allocating group tables: 0/1 done 00:09:55.105 Writing inode tables: 0/1 done 00:09:55.105 Creating journal (1024 blocks): done 00:09:55.105 Writing superblocks and filesystem accounting information: 0/1 done 00:09:55.105 00:09:55.105 09:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:55.105 09:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:55.105 09:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:55.105 09:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:55.105 09:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:55.105 09:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:55.105 09:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:55.105 09:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:55.363 09:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:55.363 09:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:55.363 09:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:55.363 09:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:55.363 09:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:55.363 09:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:55.363 09:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:55.363 09:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:55.363 09:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:55.363 09:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:09:55.363 09:15:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 66767 00:09:55.363 09:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 66767 ']' 00:09:55.363 09:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 66767 00:09:55.363 09:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:09:55.363 09:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:55.363 09:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66767 00:09:55.363 09:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:55.363 09:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:55.363 killing process with pid 66767 00:09:55.363 09:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66767' 00:09:55.363 09:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 66767 00:09:55.364 09:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 66767 00:09:56.738 09:15:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:09:56.738 00:09:56.738 real 0m13.044s 00:09:56.738 user 0m18.674s 00:09:56.738 sys 0m3.987s 00:09:56.738 09:15:42 
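The logical-volume round trip that closes out the nbd test above is easier to read pulled out of the trace; against the same /var/tmp/spdk-nbd.sock socket it is just the following sequence (all of these calls appear verbatim in the run, and the UUIDs printed above are the values returned by the two create calls):

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    $RPC bdev_malloc_create -b malloc_lvol_verify 16 512    # 16 MiB malloc bdev, 512 B blocks
    $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs    # prints the lvstore UUID
    $RPC bdev_lvol_create lvol 4 -l lvs                     # 4 MiB lvol on that store
    $RPC nbd_start_disk lvs/lvol /dev/nbd0                  # expose the lvol as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                     # the mke2fs output above comes from here
    $RPC nbd_stop_disk /dev/nbd0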
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:56.738 09:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:56.738 ************************************ 00:09:56.738 END TEST bdev_nbd 00:09:56.738 ************************************ 00:09:56.738 09:15:42 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:56.738 09:15:42 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:09:56.738 09:15:42 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:09:56.738 skipping fio tests on NVMe due to multi-ns failures. 00:09:56.738 09:15:42 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:09:56.738 09:15:42 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:56.738 09:15:42 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:56.738 09:15:42 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:09:56.738 09:15:42 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.738 09:15:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:56.738 ************************************ 00:09:56.738 START TEST bdev_verify 00:09:56.738 ************************************ 00:09:56.738 09:15:42 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:56.738 [2024-07-12 09:15:43.051467] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:56.738 [2024-07-12 09:15:43.051659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67170 ] 00:09:57.003 [2024-07-12 09:15:43.225800] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:57.261 [2024-07-12 09:15:43.484491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.261 [2024-07-12 09:15:43.484492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.232 Running I/O for 5 seconds... 
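The verify stage just launched drives all six namespaces through bdevperf rather than NBD. The invocation from the trace, reflowed for readability:

    # bdev_verify: 128 outstanding 4 KiB verify I/Os per bdev for 5 seconds on two cores.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

-q is the queue depth, -o the I/O size in bytes, -w the workload, -t the run time and -m the core mask; with -C set, the results below list one job per core for every namespace (core masks 0x1 and 0x2).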
00:10:03.493 00:10:03.493 Latency(us) 00:10:03.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.493 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:03.493 Verification LBA range: start 0x0 length 0xbd0bd 00:10:03.493 Nvme0n1 : 5.09 1532.89 5.99 0.00 0.00 83316.82 16562.73 85315.96 00:10:03.493 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:03.493 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:10:03.493 Nvme0n1 : 5.09 1533.39 5.99 0.00 0.00 83281.70 14775.39 91035.46 00:10:03.493 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:03.493 Verification LBA range: start 0x0 length 0xa0000 00:10:03.493 Nvme1n1 : 5.10 1531.78 5.98 0.00 0.00 83229.92 18826.71 85315.96 00:10:03.493 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:03.493 Verification LBA range: start 0xa0000 length 0xa0000 00:10:03.493 Nvme1n1 : 5.09 1532.86 5.99 0.00 0.00 83128.47 15966.95 84362.71 00:10:03.493 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:03.493 Verification LBA range: start 0x0 length 0x80000 00:10:03.493 Nvme2n1 : 5.10 1530.69 5.98 0.00 0.00 83128.92 20494.89 86269.21 00:10:03.493 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:03.493 Verification LBA range: start 0x80000 length 0x80000 00:10:03.493 Nvme2n1 : 5.10 1531.79 5.98 0.00 0.00 82993.61 16920.20 81026.33 00:10:03.493 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:03.493 Verification LBA range: start 0x0 length 0x80000 00:10:03.494 Nvme2n2 : 5.10 1529.67 5.98 0.00 0.00 83030.51 19660.80 85792.58 00:10:03.494 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:03.494 Verification LBA range: start 0x80000 length 0x80000 00:10:03.494 Nvme2n2 : 5.10 1530.74 5.98 0.00 0.00 82864.44 18707.55 82456.20 00:10:03.494 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:03.494 Verification LBA range: start 0x0 length 0x80000 00:10:03.494 Nvme2n3 : 5.11 1528.88 5.97 0.00 0.00 82924.88 17992.61 86269.21 00:10:03.494 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:03.494 Verification LBA range: start 0x80000 length 0x80000 00:10:03.494 Nvme2n3 : 5.10 1529.82 5.98 0.00 0.00 82757.75 17158.52 84839.33 00:10:03.494 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:03.494 Verification LBA range: start 0x0 length 0x20000 00:10:03.494 Nvme3n1 : 5.11 1528.23 5.97 0.00 0.00 82793.78 15132.86 85315.96 00:10:03.494 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:03.494 Verification LBA range: start 0x20000 length 0x20000 00:10:03.494 Nvme3n1 : 5.11 1529.13 5.97 0.00 0.00 82663.51 12213.53 90558.84 00:10:03.494 =================================================================================================================== 00:10:03.494 Total : 18369.87 71.76 0.00 0.00 83009.53 12213.53 91035.46 00:10:04.867 00:10:04.867 real 0m7.874s 00:10:04.867 user 0m14.310s 00:10:04.867 sys 0m0.259s 00:10:04.867 09:15:50 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.867 ************************************ 00:10:04.867 09:15:50 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:10:04.867 END TEST bdev_verify 00:10:04.867 ************************************ 00:10:04.867 09:15:50 blockdev_nvme -- 
common/autotest_common.sh@1142 -- # return 0 00:10:04.867 09:15:50 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:04.867 09:15:50 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:10:04.867 09:15:50 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.867 09:15:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:04.867 ************************************ 00:10:04.867 START TEST bdev_verify_big_io 00:10:04.867 ************************************ 00:10:04.867 09:15:50 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:04.867 [2024-07-12 09:15:50.971339] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:04.867 [2024-07-12 09:15:50.971521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67274 ] 00:10:04.867 [2024-07-12 09:15:51.142756] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:05.124 [2024-07-12 09:15:51.336547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.124 [2024-07-12 09:15:51.336571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.058 Running I/O for 5 seconds... 00:10:12.616 00:10:12.616 Latency(us) 00:10:12.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:12.616 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:12.616 Verification LBA range: start 0x0 length 0xbd0b 00:10:12.616 Nvme0n1 : 5.72 128.76 8.05 0.00 0.00 958605.01 23235.49 1014258.97 00:10:12.616 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:12.616 Verification LBA range: start 0xbd0b length 0xbd0b 00:10:12.616 Nvme0n1 : 5.66 107.40 6.71 0.00 0.00 1148547.90 15490.33 1288795.23 00:10:12.616 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:12.616 Verification LBA range: start 0x0 length 0xa000 00:10:12.616 Nvme1n1 : 5.83 131.68 8.23 0.00 0.00 911761.22 100091.35 819795.78 00:10:12.616 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:12.616 Verification LBA range: start 0xa000 length 0xa000 00:10:12.616 Nvme1n1 : 5.66 107.35 6.71 0.00 0.00 1101037.33 121062.87 1090519.04 00:10:12.616 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:12.616 Verification LBA range: start 0x0 length 0x8000 00:10:12.616 Nvme2n1 : 5.84 131.58 8.22 0.00 0.00 882678.07 111530.36 884616.84 00:10:12.616 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:12.616 Verification LBA range: start 0x8000 length 0x8000 00:10:12.616 Nvme2n1 : 5.90 112.16 7.01 0.00 0.00 1004987.81 116296.61 926559.88 00:10:12.616 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:12.616 Verification LBA range: start 0x0 length 0x8000 00:10:12.616 Nvme2n2 : 5.90 134.56 8.41 0.00 0.00 839708.40 59101.56 850299.81 00:10:12.616 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 
65536) 00:10:12.616 Verification LBA range: start 0x8000 length 0x8000 00:10:12.616 Nvme2n2 : 6.00 115.38 7.21 0.00 0.00 948469.67 30742.34 2089525.99 00:10:12.616 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:12.616 Verification LBA range: start 0x0 length 0x8000 00:10:12.616 Nvme2n3 : 5.91 140.74 8.80 0.00 0.00 788416.20 6881.28 808356.77 00:10:12.616 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:12.616 Verification LBA range: start 0x8000 length 0x8000 00:10:12.616 Nvme2n3 : 6.02 124.43 7.78 0.00 0.00 852740.13 17754.30 2135282.04 00:10:12.616 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:12.616 Verification LBA range: start 0x0 length 0x2000 00:10:12.616 Nvme3n1 : 5.93 146.49 9.16 0.00 0.00 737203.59 6881.28 884616.84 00:10:12.616 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:12.616 Verification LBA range: start 0x2000 length 0x2000 00:10:12.616 Nvme3n1 : 6.16 151.18 9.45 0.00 0.00 684753.21 897.40 2150534.05 00:10:12.616 =================================================================================================================== 00:10:12.616 Total : 1531.71 95.73 0.00 0.00 888625.16 897.40 2150534.05 00:10:13.990 00:10:13.990 real 0m9.238s 00:10:13.990 user 0m17.039s 00:10:13.990 sys 0m0.298s 00:10:13.990 09:16:00 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:13.990 ************************************ 00:10:13.990 END TEST bdev_verify_big_io 00:10:13.990 09:16:00 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:10:13.990 ************************************ 00:10:13.990 09:16:00 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:13.990 09:16:00 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:13.990 09:16:00 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:13.990 09:16:00 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:13.990 09:16:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:13.990 ************************************ 00:10:13.990 START TEST bdev_write_zeroes 00:10:13.990 ************************************ 00:10:13.990 09:16:00 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:13.990 [2024-07-12 09:16:00.257283] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:13.990 [2024-07-12 09:16:00.257460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67394 ] 00:10:14.248 [2024-07-12 09:16:00.428688] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.507 [2024-07-12 09:16:00.719368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.073 Running I/O for 1 seconds... 
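As a quick sanity check on these tables, MiB/s is just IOPS times the I/O size: for the big-I/O totals above, 1531.71 IOPS at 64 KiB gives 1531.71 * 65536 / 1048576, which is about 95.73 MiB/s and matches the Total row.

    awk 'BEGIN { printf "%.2f\n", 1531.71 * 65536 / 1048576 }'   # -> 95.73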
00:10:16.447 00:10:16.447 Latency(us) 00:10:16.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:16.447 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:16.447 Nvme0n1 : 1.04 8053.40 31.46 0.00 0.00 15742.92 11081.54 32172.22 00:10:16.447 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:16.447 Nvme1n1 : 1.03 8174.74 31.93 0.00 0.00 15558.92 11439.01 30980.65 00:10:16.447 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:16.447 Nvme2n1 : 1.05 8078.98 31.56 0.00 0.00 15623.55 11319.85 30980.65 00:10:16.447 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:16.447 Nvme2n2 : 1.03 8136.48 31.78 0.00 0.00 15549.40 11260.28 30742.34 00:10:16.447 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:16.447 Nvme2n3 : 1.03 8107.11 31.67 0.00 0.00 15567.59 9472.93 30980.65 00:10:16.447 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:16.447 Nvme3n1 : 1.04 8077.44 31.55 0.00 0.00 15565.85 11319.85 30980.65 00:10:16.447 =================================================================================================================== 00:10:16.447 Total : 48628.15 189.95 0.00 0.00 15601.42 9472.93 32172.22 00:10:17.380 00:10:17.380 real 0m3.431s 00:10:17.380 user 0m3.071s 00:10:17.380 sys 0m0.236s 00:10:17.380 09:16:03 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:17.380 ************************************ 00:10:17.380 09:16:03 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:10:17.380 END TEST bdev_write_zeroes 00:10:17.380 ************************************ 00:10:17.380 09:16:03 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:17.380 09:16:03 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:17.380 09:16:03 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:17.380 09:16:03 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:17.380 09:16:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:17.380 ************************************ 00:10:17.380 START TEST bdev_json_nonenclosed 00:10:17.380 ************************************ 00:10:17.380 09:16:03 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:17.637 [2024-07-12 09:16:03.747427] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:17.637 [2024-07-12 09:16:03.747611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67453 ] 00:10:17.637 [2024-07-12 09:16:03.924381] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.895 [2024-07-12 09:16:04.197113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.895 [2024-07-12 09:16:04.197257] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:17.895 [2024-07-12 09:16:04.197298] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:17.895 [2024-07-12 09:16:04.197322] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:18.460 00:10:18.460 real 0m1.080s 00:10:18.460 user 0m0.829s 00:10:18.460 sys 0m0.141s 00:10:18.460 09:16:04 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:10:18.460 09:16:04 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:18.460 09:16:04 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:10:18.460 ************************************ 00:10:18.460 END TEST bdev_json_nonenclosed 00:10:18.460 ************************************ 00:10:18.460 09:16:04 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:10:18.460 09:16:04 blockdev_nvme -- bdev/blockdev.sh@782 -- # true 00:10:18.461 09:16:04 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:18.461 09:16:04 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:18.461 09:16:04 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:18.461 09:16:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:18.461 ************************************ 00:10:18.461 START TEST bdev_json_nonarray 00:10:18.461 ************************************ 00:10:18.461 09:16:04 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:18.719 [2024-07-12 09:16:04.853941] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:18.719 [2024-07-12 09:16:04.854105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67484 ] 00:10:18.719 [2024-07-12 09:16:05.017394] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.977 [2024-07-12 09:16:05.206442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.977 [2024-07-12 09:16:05.206561] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
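Both JSON negative tests above feed bdevperf a deliberately malformed config: nonenclosed.json is not wrapped in an outer object and nonarray.json makes "subsystems" something other than an array, so json_config_prepare_ctx rejects each with the errors shown and the wrapper expects exit status 234. For contrast, a minimal well-formed skeleton (empty bdev config, shown only to illustrate the required shape) would look like this, saved to a file and passed via --json as in the invocations above:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": []
        }
      ]
    }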
00:10:18.977 [2024-07-12 09:16:05.206588] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:18.977 [2024-07-12 09:16:05.206605] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:19.544 00:10:19.544 real 0m0.860s 00:10:19.544 user 0m0.623s 00:10:19.544 sys 0m0.130s 00:10:19.544 09:16:05 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:10:19.544 09:16:05 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:19.544 09:16:05 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:10:19.544 ************************************ 00:10:19.544 END TEST bdev_json_nonarray 00:10:19.544 ************************************ 00:10:19.544 09:16:05 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:10:19.544 09:16:05 blockdev_nvme -- bdev/blockdev.sh@785 -- # true 00:10:19.544 09:16:05 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:10:19.544 09:16:05 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:10:19.544 09:16:05 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:10:19.544 09:16:05 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:10:19.544 09:16:05 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:10:19.544 09:16:05 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:19.544 09:16:05 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:19.544 09:16:05 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:10:19.544 09:16:05 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:10:19.544 09:16:05 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:10:19.544 09:16:05 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:10:19.544 00:10:19.544 real 0m44.797s 00:10:19.544 user 1m7.580s 00:10:19.544 sys 0m6.372s 00:10:19.544 09:16:05 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:19.544 09:16:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:19.544 ************************************ 00:10:19.544 END TEST blockdev_nvme 00:10:19.544 ************************************ 00:10:19.544 09:16:05 -- common/autotest_common.sh@1142 -- # return 0 00:10:19.544 09:16:05 -- spdk/autotest.sh@213 -- # uname -s 00:10:19.544 09:16:05 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:10:19.544 09:16:05 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:19.544 09:16:05 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:19.544 09:16:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:19.544 09:16:05 -- common/autotest_common.sh@10 -- # set +x 00:10:19.544 ************************************ 00:10:19.544 START TEST blockdev_nvme_gpt 00:10:19.544 ************************************ 00:10:19.544 09:16:05 blockdev_nvme_gpt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:19.544 * Looking for test storage... 
00:10:19.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # uname -s 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # test_type=gpt 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # crypto_device= 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # dek= 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # env_ctx= 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=67560 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 67560 00:10:19.544 09:16:05 blockdev_nvme_gpt -- common/autotest_common.sh@829 -- # '[' -z 67560 ']' 00:10:19.544 09:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:19.544 09:16:05 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.544 09:16:05 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:19.544 09:16:05 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
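Here blockdev.sh starts a standalone spdk_tgt and waitforlisten blocks until the RPC socket answers, up to the max_retries=100 shown above. A rough equivalent of that wait, using rpc_get_methods as the probe (probe choice and sleep interval are assumptions, not taken from this log):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    for ((i = 0; i < 100; i++)); do
        # Succeeds only once the target is up and serving /var/tmp/spdk.sock.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done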
00:10:19.544 09:16:05 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:19.544 09:16:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:19.803 [2024-07-12 09:16:05.917030] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:19.803 [2024-07-12 09:16:05.917780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67560 ] 00:10:19.803 [2024-07-12 09:16:06.083567] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.062 [2024-07-12 09:16:06.271253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.998 09:16:06 blockdev_nvme_gpt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:20.998 09:16:06 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # return 0 00:10:20.998 09:16:06 blockdev_nvme_gpt -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:10:20.998 09:16:06 blockdev_nvme_gpt -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:10:20.998 09:16:06 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:20.998 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:21.257 Waiting for block devices as requested 00:10:21.257 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:21.257 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:21.514 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:21.514 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:26.779 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:26.779 09:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:10:26.779 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:10:26.779 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local 
device=nvme2n1 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:10:26.780 09:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:26.780 09:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:11.0/nvme/nvme0/nvme0n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n2' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n3' '/sys/bus/pci/drivers/nvme/0000:00:13.0/nvme/nvme3/nvme3c3n1') 00:10:26.780 09:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:10:26.780 09:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:10:26.780 09:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:10:26.780 09:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:10:26.780 09:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # dev=/dev/nvme1n1 00:10:26.780 09:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # parted /dev/nvme1n1 -ms print 00:10:26.780 09:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme1n1: unrecognised disk label 00:10:26.780 BYT; 00:10:26.780 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:10:26.780 
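The device scan above first rules out zoned namespaces and then probes each remaining namespace with parted until it finds one that still has no disk label (that is the "unrecognised disk label" output captured into the pt variable). Reduced to the two commands visible in this run, the checks are:

    # A regular (non-zoned) namespace reports "none" here; zoned ones are skipped.
    cat /sys/block/nvme1n1/queue/zoned
    # Machine-readable probe; an empty namespace yields "unrecognised disk label".
    sudo parted /dev/nvme1n1 -ms print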
09:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme1n1: unrecognised disk label 00:10:26.780 BYT; 00:10:26.780 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\1\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:10:26.780 09:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme1n1 00:10:26.780 09:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@116 -- # break 00:10:26.780 09:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme1n1 ]] 00:10:26.780 09:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:10:26.780 09:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:26.780 09:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme1n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:10:26.780 09:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:10:26.780 09:16:12 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:10:26.780 09:16:12 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:26.780 09:16:12 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:26.780 09:16:12 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:10:26.780 09:16:12 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:10:26.780 09:16:12 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:26.780 09:16:12 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:10:26.780 09:16:12 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:26.780 09:16:12 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:26.780 09:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:26.780 09:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:10:26.780 09:16:12 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:10:26.780 09:16:12 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:26.780 09:16:12 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:26.780 09:16:12 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:10:26.780 09:16:12 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:10:26.780 09:16:12 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:26.780 09:16:12 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:10:26.780 09:16:12 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:26.780 09:16:12 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:26.780 09:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:26.780 09:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 
1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1 00:10:27.714 The operation has completed successfully. 00:10:27.714 09:16:13 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme1n1 00:10:28.649 The operation has completed successfully. 00:10:28.649 09:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:29.215 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:29.781 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:29.781 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:29.781 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:29.781 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:30.040 09:16:16 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:10:30.040 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.040 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:30.040 [] 00:10:30.040 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.040 09:16:16 blockdev_nvme_gpt -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:10:30.040 09:16:16 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:10:30.040 09:16:16 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:10:30.040 09:16:16 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:30.040 09:16:16 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:10:30.040 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.040 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:30.299 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.299 09:16:16 blockdev_nvme_gpt -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:10:30.299 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.299 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:30.299 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.299 09:16:16 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # cat 00:10:30.299 09:16:16 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:10:30.299 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.299 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:30.299 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.299 09:16:16 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:10:30.299 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.299 09:16:16 blockdev_nvme_gpt -- 
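The partition type GUIDs used here are not arbitrary: they are parsed out of module/bdev/gpt/gpt.h (SPDK_GPT_PART_TYPE_GUID and its _OLD variant) so that the gpt vbdev module will recognise and claim both partitions when the target examines the disk. Condensed, the labelling step performed above is:

    dev=/dev/nvme1n1
    # Create a GPT label with two halves, then stamp SPDK's partition type and
    # unique partition GUIDs on them (values as extracted from gpt.h above).
    sudo parted -s "$dev" mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    sudo sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
                -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
    sudo sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
                -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"

After setup.sh hands the controllers back to the userspace driver, gen_nvme.sh emits the bdev_nvme_attach_controller configuration that load_subsystem_config replays into the target; that is what produces the Nvme0n1p1 and Nvme0n1p2 GPT bdevs listed in the bdev_get_bdevs output below.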
common/autotest_common.sh@10 -- # set +x 00:10:30.299 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.299 09:16:16 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:30.299 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.299 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:30.299 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.299 09:16:16 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:10:30.299 09:16:16 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:10:30.299 09:16:16 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:10:30.299 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.299 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:30.558 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.558 09:16:16 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:10:30.558 09:16:16 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # jq -r .name 00:10:30.559 09:16:16 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' 
"partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "189e1c38-cb06-46a9-a118-594fa66b0eb6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "189e1c38-cb06-46a9-a118-594fa66b0eb6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "74520b65-76bb-411c-839a-1ecf539ea5fd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "74520b65-76bb-411c-839a-1ecf539ea5fd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "22d99e32-bf81-42be-b84e-ab9c0e767ee0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "22d99e32-bf81-42be-b84e-ab9c0e767ee0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "52dbd268-37f3-4d7b-92c5-e1bd7b27eb43"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "52dbd268-37f3-4d7b-92c5-e1bd7b27eb43",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "3fe60c88-a571-4848-ba54-a5683a01a8c9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3fe60c88-a571-4848-ba54-a5683a01a8c9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": 
false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:30.559 09:16:16 blockdev_nvme_gpt -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:10:30.559 09:16:16 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:10:30.559 09:16:16 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:10:30.559 09:16:16 blockdev_nvme_gpt -- bdev/blockdev.sh@754 -- # killprocess 67560 00:10:30.559 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@948 -- # '[' -z 67560 ']' 00:10:30.559 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # kill -0 67560 00:10:30.559 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # uname 00:10:30.559 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:30.559 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67560 00:10:30.559 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:30.559 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:30.559 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67560' 00:10:30.559 killing process with pid 67560 00:10:30.559 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@967 -- # kill 67560 00:10:30.559 09:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # wait 67560 00:10:33.089 09:16:18 blockdev_nvme_gpt -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:33.089 09:16:18 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:10:33.089 09:16:18 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:33.089 09:16:18 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.089 09:16:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:33.089 ************************************ 00:10:33.089 START TEST bdev_hello_world 00:10:33.089 ************************************ 00:10:33.089 09:16:18 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:10:33.089 [2024-07-12 09:16:19.061683] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
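With the configuration saved and the first target shut down, the bdev_hello_world case that starts here runs the stock hello_bdev example standalone against the first GPT partition bdev, using the JSON that blockdev.sh wrote to test/bdev/bdev.json. The invocation, as in the trace, run from the repository root:

    # Write a string through Nvme0n1p1 and read it back via the bdev layer.
    sudo ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b Nvme0n1p1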
00:10:33.089 [2024-07-12 09:16:19.061908] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68192 ] 00:10:33.089 [2024-07-12 09:16:19.251314] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.348 [2024-07-12 09:16:19.480212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.915 [2024-07-12 09:16:20.086980] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:33.915 [2024-07-12 09:16:20.087044] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:10:33.915 [2024-07-12 09:16:20.087076] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:33.915 [2024-07-12 09:16:20.090067] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:33.915 [2024-07-12 09:16:20.090569] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:33.915 [2024-07-12 09:16:20.090612] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:33.915 [2024-07-12 09:16:20.090842] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:10:33.915 00:10:33.915 [2024-07-12 09:16:20.090888] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:35.291 00:10:35.291 real 0m2.241s 00:10:35.291 user 0m1.918s 00:10:35.291 sys 0m0.210s 00:10:35.291 09:16:21 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:35.291 09:16:21 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:35.291 ************************************ 00:10:35.291 END TEST bdev_hello_world 00:10:35.291 ************************************ 00:10:35.291 09:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:10:35.291 09:16:21 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:10:35.291 09:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:35.291 09:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.291 09:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:35.291 ************************************ 00:10:35.291 START TEST bdev_bounds 00:10:35.291 ************************************ 00:10:35.291 09:16:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:10:35.291 09:16:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=68234 00:10:35.291 09:16:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:35.291 09:16:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:35.291 Process bdevio pid: 68234 00:10:35.291 09:16:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 68234' 00:10:35.291 09:16:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 68234 00:10:35.291 09:16:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 68234 ']' 00:10:35.291 09:16:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.291 09:16:21 blockdev_nvme_gpt.bdev_bounds -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:10:35.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.291 09:16:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.291 09:16:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:35.291 09:16:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:35.291 [2024-07-12 09:16:21.348763] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:35.291 [2024-07-12 09:16:21.348941] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68234 ] 00:10:35.291 [2024-07-12 09:16:21.520732] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:35.550 [2024-07-12 09:16:21.715821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.550 [2024-07-12 09:16:21.715940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.550 [2024-07-12 09:16:21.715965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.115 09:16:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:36.115 09:16:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:10:36.115 09:16:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:36.373 I/O targets: 00:10:36.373 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:10:36.373 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:10:36.373 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:10:36.373 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:36.373 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:36.373 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:36.373 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:36.373 00:10:36.373 00:10:36.373 CUnit - A unit testing framework for C - Version 2.1-3 00:10:36.373 http://cunit.sourceforge.net/ 00:10:36.373 00:10:36.373 00:10:36.373 Suite: bdevio tests on: Nvme3n1 00:10:36.373 Test: blockdev write read block ...passed 00:10:36.373 Test: blockdev write zeroes read block ...passed 00:10:36.373 Test: blockdev write zeroes read no split ...passed 00:10:36.373 Test: blockdev write zeroes read split ...passed 00:10:36.373 Test: blockdev write zeroes read split partial ...passed 00:10:36.373 Test: blockdev reset ...[2024-07-12 09:16:22.554464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:10:36.373 [2024-07-12 09:16:22.558402] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
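bdev_bounds drives the bdevio exerciser with the same saved configuration and then triggers its CUnit suites over RPC, roughly as below; the harness also waits for bdevio's RPC socket before calling perform_tests, which this sketch omits:

    # Start the I/O exerciser on the configured bdevs, then run its suites.
    sudo ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
    ./test/bdev/bdevio/tests.py perform_tests

The COMPARE FAILURE (02/85) and INVALID OPCODE (00/01) completions printed inside the "comparev and writev" and "nvme passthru vendor specific" cases are the error paths those cases exercise on purpose; each suite still reports "passed" immediately afterwards, as seen throughout the per-namespace suites that follow.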
00:10:36.373 passed 00:10:36.373 Test: blockdev write read 8 blocks ...passed 00:10:36.373 Test: blockdev write read size > 128k ...passed 00:10:36.373 Test: blockdev write read invalid size ...passed 00:10:36.373 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:36.373 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:36.373 Test: blockdev write read max offset ...passed 00:10:36.373 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:36.373 Test: blockdev writev readv 8 blocks ...passed 00:10:36.373 Test: blockdev writev readv 30 x 1block ...passed 00:10:36.373 Test: blockdev writev readv block ...passed 00:10:36.373 Test: blockdev writev readv size > 128k ...passed 00:10:36.373 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:36.373 Test: blockdev comparev and writev ...[2024-07-12 09:16:22.565976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26d204000 len:0x1000 00:10:36.373 [2024-07-12 09:16:22.566144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:36.373 passed 00:10:36.373 Test: blockdev nvme passthru rw ...passed 00:10:36.373 Test: blockdev nvme passthru vendor specific ...[2024-07-12 09:16:22.567125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:36.373 [2024-07-12 09:16:22.567275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:36.373 passed 00:10:36.373 Test: blockdev nvme admin passthru ...passed 00:10:36.373 Test: blockdev copy ...passed 00:10:36.373 Suite: bdevio tests on: Nvme2n3 00:10:36.373 Test: blockdev write read block ...passed 00:10:36.373 Test: blockdev write zeroes read block ...passed 00:10:36.373 Test: blockdev write zeroes read no split ...passed 00:10:36.373 Test: blockdev write zeroes read split ...passed 00:10:36.373 Test: blockdev write zeroes read split partial ...passed 00:10:36.373 Test: blockdev reset ...[2024-07-12 09:16:22.632361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:10:36.373 [2024-07-12 09:16:22.636599] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:36.373 passed 00:10:36.373 Test: blockdev write read 8 blocks ...passed 00:10:36.373 Test: blockdev write read size > 128k ...passed 00:10:36.373 Test: blockdev write read invalid size ...passed 00:10:36.373 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:36.373 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:36.373 Test: blockdev write read max offset ...passed 00:10:36.373 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:36.373 Test: blockdev writev readv 8 blocks ...passed 00:10:36.373 Test: blockdev writev readv 30 x 1block ...passed 00:10:36.373 Test: blockdev writev readv block ...passed 00:10:36.373 Test: blockdev writev readv size > 128k ...passed 00:10:36.373 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:36.373 Test: blockdev comparev and writev ...[2024-07-12 09:16:22.644702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28663a000 len:0x1000 00:10:36.373 [2024-07-12 09:16:22.644875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:36.373 passed 00:10:36.373 Test: blockdev nvme passthru rw ...passed 00:10:36.373 Test: blockdev nvme passthru vendor specific ...[2024-07-12 09:16:22.645713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:36.373 [2024-07-12 09:16:22.645757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:36.373 passed 00:10:36.373 Test: blockdev nvme admin passthru ...passed 00:10:36.373 Test: blockdev copy ...passed 00:10:36.373 Suite: bdevio tests on: Nvme2n2 00:10:36.373 Test: blockdev write read block ...passed 00:10:36.373 Test: blockdev write zeroes read block ...passed 00:10:36.373 Test: blockdev write zeroes read no split ...passed 00:10:36.373 Test: blockdev write zeroes read split ...passed 00:10:36.373 Test: blockdev write zeroes read split partial ...passed 00:10:36.373 Test: blockdev reset ...[2024-07-12 09:16:22.711115] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:10:36.373 [2024-07-12 09:16:22.715493] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:36.373 passed 00:10:36.373 Test: blockdev write read 8 blocks ...passed 00:10:36.373 Test: blockdev write read size > 128k ...passed 00:10:36.373 Test: blockdev write read invalid size ...passed 00:10:36.373 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:36.373 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:36.373 Test: blockdev write read max offset ...passed 00:10:36.373 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:36.373 Test: blockdev writev readv 8 blocks ...passed 00:10:36.373 Test: blockdev writev readv 30 x 1block ...passed 00:10:36.373 Test: blockdev writev readv block ...passed 00:10:36.373 Test: blockdev writev readv size > 128k ...passed 00:10:36.373 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:36.631 Test: blockdev comparev and writev ...[2024-07-12 09:16:22.723054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x286636000 len:0x1000 00:10:36.631 [2024-07-12 09:16:22.723114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:36.631 passed 00:10:36.631 Test: blockdev nvme passthru rw ...passed 00:10:36.631 Test: blockdev nvme passthru vendor specific ...[2024-07-12 09:16:22.724077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:36.631 [2024-07-12 09:16:22.724119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:36.631 passed 00:10:36.631 Test: blockdev nvme admin passthru ...passed 00:10:36.631 Test: blockdev copy ...passed 00:10:36.631 Suite: bdevio tests on: Nvme2n1 00:10:36.631 Test: blockdev write read block ...passed 00:10:36.631 Test: blockdev write zeroes read block ...passed 00:10:36.631 Test: blockdev write zeroes read no split ...passed 00:10:36.632 Test: blockdev write zeroes read split ...passed 00:10:36.632 Test: blockdev write zeroes read split partial ...passed 00:10:36.632 Test: blockdev reset ...[2024-07-12 09:16:22.797046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:10:36.632 [2024-07-12 09:16:22.802807] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:36.632 passed 00:10:36.632 Test: blockdev write read 8 blocks ...passed 00:10:36.632 Test: blockdev write read size > 128k ...passed 00:10:36.632 Test: blockdev write read invalid size ...passed 00:10:36.632 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:36.632 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:36.632 Test: blockdev write read max offset ...passed 00:10:36.632 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:36.632 Test: blockdev writev readv 8 blocks ...passed 00:10:36.632 Test: blockdev writev readv 30 x 1block ...passed 00:10:36.632 Test: blockdev writev readv block ...passed 00:10:36.632 Test: blockdev writev readv size > 128k ...passed 00:10:36.632 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:36.632 Test: blockdev comparev and writev ...[2024-07-12 09:16:22.810386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x286630000 len:0x1000 00:10:36.632 [2024-07-12 09:16:22.810449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:36.632 passed 00:10:36.632 Test: blockdev nvme passthru rw ...passed 00:10:36.632 Test: blockdev nvme passthru vendor specific ...[2024-07-12 09:16:22.811573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:36.632 [2024-07-12 09:16:22.811616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:36.632 passed 00:10:36.632 Test: blockdev nvme admin passthru ...passed 00:10:36.632 Test: blockdev copy ...passed 00:10:36.632 Suite: bdevio tests on: Nvme1n1 00:10:36.632 Test: blockdev write read block ...passed 00:10:36.632 Test: blockdev write zeroes read block ...passed 00:10:36.632 Test: blockdev write zeroes read no split ...passed 00:10:36.632 Test: blockdev write zeroes read split ...passed 00:10:36.632 Test: blockdev write zeroes read split partial ...passed 00:10:36.632 Test: blockdev reset ...[2024-07-12 09:16:22.876276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:10:36.632 [2024-07-12 09:16:22.880472] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:36.632 passed 00:10:36.632 Test: blockdev write read 8 blocks ...passed 00:10:36.632 Test: blockdev write read size > 128k ...passed 00:10:36.632 Test: blockdev write read invalid size ...passed 00:10:36.632 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:36.632 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:36.632 Test: blockdev write read max offset ...passed 00:10:36.632 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:36.632 Test: blockdev writev readv 8 blocks ...passed 00:10:36.632 Test: blockdev writev readv 30 x 1block ...passed 00:10:36.632 Test: blockdev writev readv block ...passed 00:10:36.632 Test: blockdev writev readv size > 128k ...passed 00:10:36.632 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:36.632 Test: blockdev comparev and writev ...[2024-07-12 09:16:22.888300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x274e0e000 len:0x1000 00:10:36.632 [2024-07-12 09:16:22.888356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:36.632 passed 00:10:36.632 Test: blockdev nvme passthru rw ...passed 00:10:36.632 Test: blockdev nvme passthru vendor specific ...[2024-07-12 09:16:22.889235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:36.632 [2024-07-12 09:16:22.889277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:36.632 passed 00:10:36.632 Test: blockdev nvme admin passthru ...passed 00:10:36.632 Test: blockdev copy ...passed 00:10:36.632 Suite: bdevio tests on: Nvme0n1p2 00:10:36.632 Test: blockdev write read block ...passed 00:10:36.632 Test: blockdev write zeroes read block ...passed 00:10:36.632 Test: blockdev write zeroes read no split ...passed 00:10:36.632 Test: blockdev write zeroes read split ...passed 00:10:36.632 Test: blockdev write zeroes read split partial ...passed 00:10:36.632 Test: blockdev reset ...[2024-07-12 09:16:22.959862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:10:36.632 [2024-07-12 09:16:22.963638] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:36.632 passed 00:10:36.632 Test: blockdev write read 8 blocks ...passed 00:10:36.632 Test: blockdev write read size > 128k ...passed 00:10:36.632 Test: blockdev write read invalid size ...passed 00:10:36.632 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:36.632 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:36.632 Test: blockdev write read max offset ...passed 00:10:36.632 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:36.632 Test: blockdev writev readv 8 blocks ...passed 00:10:36.632 Test: blockdev writev readv 30 x 1block ...passed 00:10:36.632 Test: blockdev writev readv block ...passed 00:10:36.632 Test: blockdev writev readv size > 128k ...passed 00:10:36.632 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:36.632 Test: blockdev comparev and writev ...passed 00:10:36.632 Test: blockdev nvme passthru rw ...passed 00:10:36.632 Test: blockdev nvme passthru vendor specific ...passed 00:10:36.632 Test: blockdev nvme admin passthru ...passed 00:10:36.632 Test: blockdev copy ...[2024-07-12 09:16:22.970891] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:10:36.632 separate metadata which is not supported yet. 00:10:36.632 passed 00:10:36.632 Suite: bdevio tests on: Nvme0n1p1 00:10:36.632 Test: blockdev write read block ...passed 00:10:36.632 Test: blockdev write zeroes read block ...passed 00:10:36.632 Test: blockdev write zeroes read no split ...passed 00:10:36.889 Test: blockdev write zeroes read split ...passed 00:10:36.889 Test: blockdev write zeroes read split partial ...passed 00:10:36.889 Test: blockdev reset ...[2024-07-12 09:16:23.037227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:10:36.889 [2024-07-12 09:16:23.040838] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:36.889 passed 00:10:36.889 Test: blockdev write read 8 blocks ...passed 00:10:36.889 Test: blockdev write read size > 128k ...passed 00:10:36.889 Test: blockdev write read invalid size ...passed 00:10:36.889 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:36.889 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:36.889 Test: blockdev write read max offset ...passed 00:10:36.889 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:36.889 Test: blockdev writev readv 8 blocks ...passed 00:10:36.889 Test: blockdev writev readv 30 x 1block ...passed 00:10:36.889 Test: blockdev writev readv block ...passed 00:10:36.889 Test: blockdev writev readv size > 128k ...passed 00:10:36.889 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:36.889 Test: blockdev comparev and writev ...passed 00:10:36.889 Test: blockdev nvme passthru rw ...passed 00:10:36.889 Test: blockdev nvme passthru vendor specific ...passed 00:10:36.889 Test: blockdev nvme admin passthru ...passed 00:10:36.889 Test: blockdev copy ...[2024-07-12 09:16:23.047899] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:10:36.889 separate metadata which is not supported yet. 
00:10:36.889 passed 00:10:36.889 00:10:36.889 Run Summary: Type Total Ran Passed Failed Inactive 00:10:36.889 suites 7 7 n/a 0 0 00:10:36.889 tests 161 161 161 0 0 00:10:36.889 asserts 1006 1006 1006 0 n/a 00:10:36.889 00:10:36.889 Elapsed time = 1.511 seconds 00:10:36.889 0 00:10:36.889 09:16:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 68234 00:10:36.889 09:16:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 68234 ']' 00:10:36.889 09:16:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 68234 00:10:36.889 09:16:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:10:36.889 09:16:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:36.889 09:16:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68234 00:10:36.889 09:16:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:36.889 09:16:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:36.889 killing process with pid 68234 00:10:36.889 09:16:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68234' 00:10:36.889 09:16:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@967 -- # kill 68234 00:10:36.889 09:16:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # wait 68234 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:10:37.827 00:10:37.827 real 0m2.845s 00:10:37.827 user 0m7.002s 00:10:37.827 sys 0m0.345s 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:37.827 ************************************ 00:10:37.827 END TEST bdev_bounds 00:10:37.827 ************************************ 00:10:37.827 09:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:10:37.827 09:16:24 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:37.827 09:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:37.827 09:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:37.827 09:16:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:37.827 ************************************ 00:10:37.827 START TEST bdev_nbd 00:10:37.827 ************************************ 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 
'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=7 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=7 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=68299 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 68299 /var/tmp/spdk-nbd.sock 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 68299 ']' 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:37.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:37.827 09:16:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:38.100 [2024-07-12 09:16:24.256260] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
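bdev_nbd starts a bdev_svc helper app on a dedicated socket (/var/tmp/spdk-nbd.sock), exports each of the seven bdevs through the kernel nbd driver, and sanity-checks it with a direct-I/O dd, which is where the "1+0 records in / 1+0 records out" lines below come from. The per-device sequence, reduced to its essentials (error handling and the loop over all bdevs omitted; nbd_stop_disk is the matching teardown RPC):

    sock=/var/tmp/spdk-nbd.sock
    # Export the GPT partition bdev over /dev/nbdX; rpc.py prints the node it picked.
    nbd=$(./scripts/rpc.py -s "$sock" nbd_start_disk Nvme0n1p1)
    # Wait until the kernel has registered the device, then read one 4 KiB block back.
    until grep -q -w "$(basename "$nbd")" /proc/partitions; do sleep 0.1; done
    sudo dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    ./scripts/rpc.py -s "$sock" nbd_stop_disk "$nbd"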
00:10:38.100 [2024-07-12 09:16:24.256434] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.100 [2024-07-12 09:16:24.427131] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.357 [2024-07-12 09:16:24.618589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.923 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:38.923 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:10:38.923 09:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:38.923 09:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:38.923 09:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:38.923 09:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:38.923 09:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:38.923 09:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:38.923 09:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:38.923 09:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:38.923 09:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:38.923 09:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:38.923 09:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:38.923 09:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:38.923 09:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:10:39.490 09:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:39.490 09:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:39.490 09:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:39.490 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:10:39.490 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:39.490 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:39.490 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:39.490 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:10:39.490 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:39.490 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:39.490 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:39.490 09:16:25 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:39.490 1+0 records in 00:10:39.490 1+0 records out 00:10:39.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000496298 s, 8.3 MB/s 00:10:39.490 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:39.490 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:39.490 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:39.490 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:39.490 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:39.490 09:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:39.490 09:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:39.490 09:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:10:39.748 09:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:39.748 09:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:39.748 09:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:39.748 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:10:39.748 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:39.748 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:39.748 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:39.748 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:10:39.748 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:39.748 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:39.748 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:39.748 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:39.748 1+0 records in 00:10:39.748 1+0 records out 00:10:39.748 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000686756 s, 6.0 MB/s 00:10:39.748 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:39.748 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:39.748 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:39.748 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:39.748 09:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:39.748 09:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:39.748 09:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:39.748 09:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1 00:10:40.006 09:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:40.006 09:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:40.006 09:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:10:40.006 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:10:40.006 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:40.006 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:40.006 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:40.006 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:10:40.006 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:40.006 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:40.006 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:40.006 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:40.006 1+0 records in 00:10:40.006 1+0 records out 00:10:40.006 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544423 s, 7.5 MB/s 00:10:40.006 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:40.006 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:40.006 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:40.006 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:40.006 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:40.006 09:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:40.006 09:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:40.006 09:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:40.265 09:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:40.265 09:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:40.265 09:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:40.265 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:10:40.265 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:40.265 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:40.265 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:40.265 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:10:40.265 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:40.265 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:40.265 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:40.265 09:16:26 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:40.265 1+0 records in 00:10:40.265 1+0 records out 00:10:40.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000693864 s, 5.9 MB/s 00:10:40.265 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:40.265 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:40.265 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:40.265 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:40.265 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:40.265 09:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:40.265 09:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:40.265 09:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:40.523 09:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:40.523 09:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:40.523 09:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:40.523 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:10:40.523 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:40.523 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:40.523 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:40.523 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:10:40.523 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:40.523 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:40.523 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:40.523 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:40.523 1+0 records in 00:10:40.523 1+0 records out 00:10:40.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000593758 s, 6.9 MB/s 00:10:40.523 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:40.523 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:40.523 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:40.523 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:40.523 09:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:40.523 09:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:40.523 09:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:40.523 09:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:10:40.781 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:40.781 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:40.781 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:40.781 09:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:10:40.781 09:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:40.781 09:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:40.781 09:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:40.781 09:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:10:40.781 09:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:40.781 09:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:40.781 09:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:40.781 09:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:40.781 1+0 records in 00:10:40.781 1+0 records out 00:10:40.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000788622 s, 5.2 MB/s 00:10:40.781 09:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:40.781 09:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:40.781 09:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:40.781 09:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:40.781 09:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:40.781 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:40.781 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:40.781 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:41.040 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:10:41.040 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:10:41.040 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:10:41.040 09:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:10:41.040 09:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:41.040 09:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:41.040 09:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:41.040 09:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:10:41.040 09:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:41.040 09:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:41.040 09:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:41.040 09:16:27 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:41.040 1+0 records in 00:10:41.040 1+0 records out 00:10:41.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000714848 s, 5.7 MB/s 00:10:41.040 09:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:41.040 09:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:41.040 09:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:41.040 09:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:41.040 09:16:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:41.040 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:41.040 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:41.040 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:41.299 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:41.299 { 00:10:41.299 "nbd_device": "/dev/nbd0", 00:10:41.299 "bdev_name": "Nvme0n1p1" 00:10:41.299 }, 00:10:41.299 { 00:10:41.299 "nbd_device": "/dev/nbd1", 00:10:41.299 "bdev_name": "Nvme0n1p2" 00:10:41.299 }, 00:10:41.299 { 00:10:41.299 "nbd_device": "/dev/nbd2", 00:10:41.299 "bdev_name": "Nvme1n1" 00:10:41.299 }, 00:10:41.299 { 00:10:41.299 "nbd_device": "/dev/nbd3", 00:10:41.299 "bdev_name": "Nvme2n1" 00:10:41.299 }, 00:10:41.299 { 00:10:41.299 "nbd_device": "/dev/nbd4", 00:10:41.299 "bdev_name": "Nvme2n2" 00:10:41.299 }, 00:10:41.299 { 00:10:41.299 "nbd_device": "/dev/nbd5", 00:10:41.299 "bdev_name": "Nvme2n3" 00:10:41.299 }, 00:10:41.299 { 00:10:41.299 "nbd_device": "/dev/nbd6", 00:10:41.299 "bdev_name": "Nvme3n1" 00:10:41.299 } 00:10:41.299 ]' 00:10:41.299 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:41.299 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:41.299 { 00:10:41.299 "nbd_device": "/dev/nbd0", 00:10:41.299 "bdev_name": "Nvme0n1p1" 00:10:41.299 }, 00:10:41.299 { 00:10:41.299 "nbd_device": "/dev/nbd1", 00:10:41.299 "bdev_name": "Nvme0n1p2" 00:10:41.299 }, 00:10:41.299 { 00:10:41.299 "nbd_device": "/dev/nbd2", 00:10:41.299 "bdev_name": "Nvme1n1" 00:10:41.299 }, 00:10:41.299 { 00:10:41.299 "nbd_device": "/dev/nbd3", 00:10:41.299 "bdev_name": "Nvme2n1" 00:10:41.299 }, 00:10:41.299 { 00:10:41.299 "nbd_device": "/dev/nbd4", 00:10:41.299 "bdev_name": "Nvme2n2" 00:10:41.299 }, 00:10:41.299 { 00:10:41.299 "nbd_device": "/dev/nbd5", 00:10:41.299 "bdev_name": "Nvme2n3" 00:10:41.299 }, 00:10:41.299 { 00:10:41.299 "nbd_device": "/dev/nbd6", 00:10:41.299 "bdev_name": "Nvme3n1" 00:10:41.299 } 00:10:41.299 ]' 00:10:41.299 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:41.299 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:10:41.299 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:41.299 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:10:41.299 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:41.299 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:41.299 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:41.299 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:41.878 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:41.878 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:41.878 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:41.878 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:41.878 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:41.878 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:41.878 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:41.878 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:41.878 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:41.878 09:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:41.878 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:41.878 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:41.878 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:41.878 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:41.878 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:41.878 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:41.878 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:41.878 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:41.878 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:41.878 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:42.445 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:42.445 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:42.445 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:42.445 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:42.445 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:42.445 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:42.445 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:42.445 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:42.445 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:42.445 09:16:28 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:42.445 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:42.445 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:42.445 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:42.445 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:42.445 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:42.445 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:42.445 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:42.445 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:42.445 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:42.445 09:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:42.703 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:42.703 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:42.704 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:42.704 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:42.704 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:42.704 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:42.962 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:42.962 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:42.962 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:42.962 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:43.221 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:43.221 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:43.221 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:43.221 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:43.221 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:43.221 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:43.221 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:43.221 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:43.221 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:43.221 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:10:43.480 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:10:43.480 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:10:43.480 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:10:43.480 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:43.480 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:43.480 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:10:43.480 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:43.480 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:43.480 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:43.480 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:43.480 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:43.480 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:43.480 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:43.480 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:43.740 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:43.740 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:43.740 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:43.740 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:43.740 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:43.740 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:43.740 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:10:43.740 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:43.740 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:10:43.740 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:43.740 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:43.740 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:43.740 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:43.740 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:43.740 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:43.740 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:43.740 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:43.740 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:43.740 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:43.740 
09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:43.740 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:43.740 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:10:43.740 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:43.740 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:43.740 09:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:10:43.998 /dev/nbd0 00:10:43.998 09:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:43.998 09:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:43.998 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:10:43.998 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:43.998 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:43.998 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:43.998 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:10:43.998 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:43.998 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:43.998 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:43.998 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:43.998 1+0 records in 00:10:43.998 1+0 records out 00:10:43.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049394 s, 8.3 MB/s 00:10:43.998 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:43.998 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:43.998 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:43.998 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:43.998 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:43.998 09:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:43.998 09:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:43.998 09:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:10:44.257 /dev/nbd1 00:10:44.257 09:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:44.257 09:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:44.257 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:10:44.257 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:44.257 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:44.257 09:16:30 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:44.257 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:10:44.257 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:44.257 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:44.257 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:44.257 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:44.257 1+0 records in 00:10:44.257 1+0 records out 00:10:44.257 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604065 s, 6.8 MB/s 00:10:44.257 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:44.257 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:44.257 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:44.257 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:44.257 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:44.257 09:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:44.257 09:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:44.257 09:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:10:44.516 /dev/nbd10 00:10:44.516 09:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:44.516 09:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:44.516 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:10:44.516 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:44.516 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:44.516 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:44.516 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:10:44.516 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:44.516 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:44.516 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:44.516 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:44.516 1+0 records in 00:10:44.516 1+0 records out 00:10:44.516 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00066982 s, 6.1 MB/s 00:10:44.516 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:44.516 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:44.516 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:44.516 09:16:30 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:44.516 09:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:44.516 09:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:44.516 09:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:44.516 09:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:10:44.789 /dev/nbd11 00:10:44.789 09:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:44.789 09:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:44.789 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:10:44.789 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:44.789 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:44.789 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:44.789 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:10:44.789 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:44.789 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:44.790 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:44.790 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:44.790 1+0 records in 00:10:44.790 1+0 records out 00:10:44.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000576953 s, 7.1 MB/s 00:10:44.790 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:44.790 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:44.790 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:44.790 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:44.790 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:44.790 09:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:44.790 09:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:44.790 09:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:10:45.074 /dev/nbd12 00:10:45.074 09:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:45.074 09:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:45.074 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:10:45.074 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:45.074 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:45.074 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:45.074 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 
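The repeated blocks above are the test's nbd readiness probe: for each freshly attached device it greps /proc/partitions until the device name appears, then performs a single 4 KiB O_DIRECT read through the nbd device to prove the data path works. A simplified standalone reconstruction of that probe, based on what the trace shows (the 20-iteration retry limit and the 4096-byte read match the log; collapsing the two retry loops into one helper and the sleep between retries are assumptions made here for readability):

# Sketch of the readiness probe seen in the trace: wait for an nbd device to
# show up in /proc/partitions, then read one 4096-byte block from it.
wait_for_nbd() {
    local nbd_name=$1 tmp size i
    tmp=$(mktemp)

    for ((i = 1; i <= 20; i++)); do
        # The device is usable once the kernel lists it in /proc/partitions.
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumption: the real scripts retry without an explicit sleep
    done

    # Probe the data path with a single direct-I/O read, as the log does.
    dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
    size=$(stat -c %s "$tmp")
    rm -f "$tmp"

    # A zero-byte result means the device is visible but not serving data.
    [[ $size -ne 0 ]]
}

# Usage: wait_for_nbd nbd0 && echo "nbd0 ready"
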
00:10:45.074 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:45.074 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:45.074 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:45.074 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:45.074 1+0 records in 00:10:45.074 1+0 records out 00:10:45.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000803644 s, 5.1 MB/s 00:10:45.074 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:45.074 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:45.074 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:45.074 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:45.074 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:45.074 09:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:45.074 09:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:45.074 09:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:10:45.332 /dev/nbd13 00:10:45.332 09:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:45.332 09:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:45.332 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:10:45.332 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:45.332 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:45.332 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:45.332 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:10:45.332 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:45.332 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:45.332 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:45.332 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:45.332 1+0 records in 00:10:45.332 1+0 records out 00:10:45.332 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000806778 s, 5.1 MB/s 00:10:45.332 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:45.332 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:45.332 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:45.332 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:45.332 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:45.332 09:16:31 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:45.332 09:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:45.332 09:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:10:45.590 /dev/nbd14 00:10:45.590 09:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:10:45.590 09:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:10:45.590 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:10:45.590 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:10:45.590 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:45.590 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:45.590 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:10:45.848 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:10:45.848 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:45.848 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:45.849 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:45.849 1+0 records in 00:10:45.849 1+0 records out 00:10:45.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108932 s, 3.8 MB/s 00:10:45.849 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:45.849 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:10:45.849 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:45.849 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:45.849 09:16:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:10:45.849 09:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:45.849 09:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:45.849 09:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:45.849 09:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:45.849 09:16:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:46.106 09:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:46.106 { 00:10:46.106 "nbd_device": "/dev/nbd0", 00:10:46.106 "bdev_name": "Nvme0n1p1" 00:10:46.106 }, 00:10:46.106 { 00:10:46.106 "nbd_device": "/dev/nbd1", 00:10:46.106 "bdev_name": "Nvme0n1p2" 00:10:46.106 }, 00:10:46.106 { 00:10:46.106 "nbd_device": "/dev/nbd10", 00:10:46.106 "bdev_name": "Nvme1n1" 00:10:46.106 }, 00:10:46.106 { 00:10:46.106 "nbd_device": "/dev/nbd11", 00:10:46.106 "bdev_name": "Nvme2n1" 00:10:46.106 }, 00:10:46.106 { 00:10:46.106 "nbd_device": "/dev/nbd12", 00:10:46.106 "bdev_name": "Nvme2n2" 00:10:46.106 }, 00:10:46.106 { 00:10:46.106 "nbd_device": "/dev/nbd13", 00:10:46.106 "bdev_name": "Nvme2n3" 
00:10:46.106 }, 00:10:46.106 { 00:10:46.106 "nbd_device": "/dev/nbd14", 00:10:46.106 "bdev_name": "Nvme3n1" 00:10:46.106 } 00:10:46.106 ]' 00:10:46.106 09:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:46.106 { 00:10:46.106 "nbd_device": "/dev/nbd0", 00:10:46.106 "bdev_name": "Nvme0n1p1" 00:10:46.106 }, 00:10:46.106 { 00:10:46.106 "nbd_device": "/dev/nbd1", 00:10:46.106 "bdev_name": "Nvme0n1p2" 00:10:46.106 }, 00:10:46.106 { 00:10:46.106 "nbd_device": "/dev/nbd10", 00:10:46.106 "bdev_name": "Nvme1n1" 00:10:46.106 }, 00:10:46.106 { 00:10:46.106 "nbd_device": "/dev/nbd11", 00:10:46.106 "bdev_name": "Nvme2n1" 00:10:46.106 }, 00:10:46.106 { 00:10:46.106 "nbd_device": "/dev/nbd12", 00:10:46.106 "bdev_name": "Nvme2n2" 00:10:46.106 }, 00:10:46.106 { 00:10:46.106 "nbd_device": "/dev/nbd13", 00:10:46.106 "bdev_name": "Nvme2n3" 00:10:46.106 }, 00:10:46.106 { 00:10:46.106 "nbd_device": "/dev/nbd14", 00:10:46.106 "bdev_name": "Nvme3n1" 00:10:46.106 } 00:10:46.106 ]' 00:10:46.106 09:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:46.106 09:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:46.106 /dev/nbd1 00:10:46.106 /dev/nbd10 00:10:46.106 /dev/nbd11 00:10:46.106 /dev/nbd12 00:10:46.106 /dev/nbd13 00:10:46.106 /dev/nbd14' 00:10:46.106 09:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:46.106 /dev/nbd1 00:10:46.106 /dev/nbd10 00:10:46.106 /dev/nbd11 00:10:46.106 /dev/nbd12 00:10:46.106 /dev/nbd13 00:10:46.106 /dev/nbd14' 00:10:46.106 09:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:46.106 09:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:10:46.106 09:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:10:46.106 09:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:10:46.106 09:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:10:46.106 09:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:10:46.106 09:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:46.106 09:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:46.106 09:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:46.106 09:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:46.106 09:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:46.106 09:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:46.107 256+0 records in 00:10:46.107 256+0 records out 00:10:46.107 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00854363 s, 123 MB/s 00:10:46.107 09:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:46.107 09:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:46.107 256+0 records in 00:10:46.107 256+0 records out 00:10:46.107 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.15702 s, 6.7 MB/s 00:10:46.107 09:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:46.107 09:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:46.363 256+0 records in 00:10:46.363 256+0 records out 00:10:46.363 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145598 s, 7.2 MB/s 00:10:46.363 09:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:46.363 09:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:46.620 256+0 records in 00:10:46.620 256+0 records out 00:10:46.620 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16232 s, 6.5 MB/s 00:10:46.620 09:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:46.620 09:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:46.620 256+0 records in 00:10:46.620 256+0 records out 00:10:46.620 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.173437 s, 6.0 MB/s 00:10:46.620 09:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:46.620 09:16:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:46.878 256+0 records in 00:10:46.878 256+0 records out 00:10:46.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154938 s, 6.8 MB/s 00:10:46.878 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:46.878 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:47.136 256+0 records in 00:10:47.136 256+0 records out 00:10:47.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.177742 s, 5.9 MB/s 00:10:47.136 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:47.136 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:10:47.136 256+0 records in 00:10:47.136 256+0 records out 00:10:47.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.17111 s, 6.1 MB/s 00:10:47.136 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:10:47.136 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:47.136 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:47.136 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:47.136 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:47.136 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:47.136 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:47.136 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:10:47.136 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:47.136 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:47.137 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:47.137 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:47.137 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:47.137 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:47.137 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:47.137 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:47.137 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:47.394 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:47.394 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:47.394 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:47.395 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:10:47.395 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:47.395 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:47.395 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:47.395 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:47.395 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:47.395 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:47.395 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:47.395 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:47.653 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:47.653 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:47.653 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:47.653 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:47.653 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:47.653 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:47.653 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:47.653 09:16:33 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:10:47.653 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:47.653 09:16:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:47.911 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:47.911 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:47.911 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:47.911 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:47.911 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:47.911 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:47.911 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:47.911 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:47.911 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:47.911 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:48.169 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:48.169 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:48.169 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:48.169 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:48.169 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:48.169 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:48.169 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:48.169 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:48.169 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:48.169 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:48.427 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:48.427 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:48.427 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:48.427 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:48.427 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:48.427 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:48.427 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:48.427 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:48.427 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:48.427 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:48.685 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:10:48.685 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:48.685 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:48.685 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:48.685 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:48.685 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:48.685 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:48.685 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:48.685 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:48.685 09:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:48.943 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:48.943 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:48.943 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:48.943 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:48.943 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:48.943 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:48.943 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:48.943 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:48.943 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:48.943 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:10:49.202 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:10:49.202 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:10:49.202 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:10:49.202 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:49.202 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:49.202 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:10:49.202 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:49.202 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:49.202 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:49.202 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:49.202 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:49.460 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:49.460 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:49.460 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:49.460 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:10:49.460 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:49.460 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:49.460 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:49.460 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:49.460 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:49.460 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:10:49.460 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:49.460 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:10:49.460 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:49.460 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:49.460 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:49.460 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:10:49.460 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:10:49.460 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:49.717 malloc_lvol_verify 00:10:49.717 09:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:49.976 cddf4b92-1105-45e7-8f4c-e2493ba4f072 00:10:49.976 09:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:50.234 c9392e88-73bc-493f-8676-e9b9df12e445 00:10:50.234 09:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:50.492 /dev/nbd0 00:10:50.492 09:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:10:50.492 mke2fs 1.46.5 (30-Dec-2021) 00:10:50.492 Discarding device blocks: 0/4096 done 00:10:50.492 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:50.492 00:10:50.492 Allocating group tables: 0/1 done 00:10:50.492 Writing inode tables: 0/1 done 00:10:50.492 Creating journal (1024 blocks): done 00:10:50.492 Writing superblocks and filesystem accounting information: 0/1 done 00:10:50.492 00:10:50.492 09:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:10:50.492 09:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:50.492 09:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:50.492 09:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:50.492 09:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:50.492 09:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:50.492 09:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:10:50.492 09:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:50.749 09:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:50.749 09:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:50.749 09:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:50.749 09:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:50.749 09:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:50.749 09:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:50.749 09:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:50.749 09:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:50.749 09:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:10:50.749 09:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:10:50.749 09:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 68299 00:10:50.749 09:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 68299 ']' 00:10:50.749 09:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 68299 00:10:50.749 09:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:10:50.749 09:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:50.749 09:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68299 00:10:51.006 09:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:51.007 killing process with pid 68299 00:10:51.007 09:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:51.007 09:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68299' 00:10:51.007 09:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@967 -- # kill 68299 00:10:51.007 09:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # wait 68299 00:10:51.961 09:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:10:51.961 00:10:51.961 real 0m14.134s 00:10:51.961 user 0m20.134s 00:10:51.961 sys 0m4.512s 00:10:51.961 09:16:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:51.961 09:16:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:51.961 ************************************ 00:10:51.961 END TEST bdev_nbd 00:10:51.961 ************************************ 00:10:52.220 09:16:38 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:10:52.220 09:16:38 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:10:52.220 09:16:38 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:10:52.220 09:16:38 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:10:52.220 skipping fio tests on NVMe due to multi-ns failures. 00:10:52.220 09:16:38 blockdev_nvme_gpt -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:10:52.220 09:16:38 blockdev_nvme_gpt -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:52.220 09:16:38 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:52.220 09:16:38 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:10:52.220 09:16:38 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.220 09:16:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:52.220 ************************************ 00:10:52.220 START TEST bdev_verify 00:10:52.220 ************************************ 00:10:52.220 09:16:38 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:52.220 [2024-07-12 09:16:38.429543] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:52.220 [2024-07-12 09:16:38.429766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68742 ] 00:10:52.478 [2024-07-12 09:16:38.603268] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:52.478 [2024-07-12 09:16:38.827204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.478 [2024-07-12 09:16:38.827209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.412 Running I/O for 5 seconds... 
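For reference, the verify pass that has just started is an ordinary bdevperf run; stripped of the run_test wrapper (and of the trailing empty argument the harness passes through) it amounts to the following, with paths and flags exactly as traced above:

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json   # bdev config written earlier in this job
  # -q queue depth, -o IO size in bytes, -w workload, -t run time in seconds, -m core mask
  "$bdevperf" --json "$conf" -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The bdev_verify_big_io and bdev_write_zeroes passes later in the log are the same invocation with the -o, -w and -t values changed (the write_zeroes run also drops -C and -m).
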
00:10:58.680 00:10:58.680 Latency(us) 00:10:58.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:58.680 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:58.680 Verification LBA range: start 0x0 length 0x5e800 00:10:58.680 Nvme0n1p1 : 5.09 1320.14 5.16 0.00 0.00 96373.76 14656.23 123922.62 00:10:58.680 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:58.680 Verification LBA range: start 0x5e800 length 0x5e800 00:10:58.680 Nvme0n1p1 : 5.09 1319.96 5.16 0.00 0.00 96366.12 13226.36 125829.12 00:10:58.680 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:58.680 Verification LBA range: start 0x0 length 0x5e7ff 00:10:58.680 Nvme0n1p2 : 5.09 1319.40 5.15 0.00 0.00 96253.68 14775.39 114390.11 00:10:58.680 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:58.680 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:10:58.680 Nvme0n1p2 : 5.11 1326.88 5.18 0.00 0.00 95980.44 16801.05 116296.61 00:10:58.680 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:58.680 Verification LBA range: start 0x0 length 0xa0000 00:10:58.680 Nvme1n1 : 5.11 1327.23 5.18 0.00 0.00 95917.37 13881.72 116773.24 00:10:58.680 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:58.680 Verification LBA range: start 0xa0000 length 0xa0000 00:10:58.680 Nvme1n1 : 5.12 1326.12 5.18 0.00 0.00 95841.74 17754.30 120109.61 00:10:58.680 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:58.680 Verification LBA range: start 0x0 length 0x80000 00:10:58.680 Nvme2n1 : 5.11 1326.76 5.18 0.00 0.00 95779.70 13881.72 119156.36 00:10:58.680 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:58.680 Verification LBA range: start 0x80000 length 0x80000 00:10:58.680 Nvme2n1 : 5.12 1325.36 5.18 0.00 0.00 95706.55 18469.24 124875.87 00:10:58.680 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:58.680 Verification LBA range: start 0x0 length 0x80000 00:10:58.680 Nvme2n2 : 5.12 1326.00 5.18 0.00 0.00 95645.52 14715.81 122016.12 00:10:58.680 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:58.680 Verification LBA range: start 0x80000 length 0x80000 00:10:58.680 Nvme2n2 : 5.12 1324.61 5.17 0.00 0.00 95571.36 19303.33 126782.37 00:10:58.680 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:58.680 Verification LBA range: start 0x0 length 0x80000 00:10:58.680 Nvme2n3 : 5.12 1325.23 5.18 0.00 0.00 95515.17 15490.33 124875.87 00:10:58.680 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:58.680 Verification LBA range: start 0x80000 length 0x80000 00:10:58.680 Nvme2n3 : 5.12 1323.90 5.17 0.00 0.00 95441.16 16920.20 127735.62 00:10:58.680 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:58.680 Verification LBA range: start 0x0 length 0x20000 00:10:58.680 Nvme3n1 : 5.12 1324.48 5.17 0.00 0.00 95386.40 12213.53 126782.37 00:10:58.680 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:58.680 Verification LBA range: start 0x20000 length 0x20000 00:10:58.680 Nvme3n1 : 5.13 1323.51 5.17 0.00 0.00 95330.28 13941.29 127735.62 00:10:58.680 =================================================================================================================== 00:10:58.680 Total : 18539.58 72.42 0.00 0.00 95792.43 12213.53 
127735.62 00:11:00.052 00:11:00.052 real 0m7.739s 00:11:00.052 user 0m14.089s 00:11:00.052 sys 0m0.242s 00:11:00.052 09:16:46 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:00.052 09:16:46 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:11:00.052 ************************************ 00:11:00.053 END TEST bdev_verify 00:11:00.053 ************************************ 00:11:00.053 09:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:11:00.053 09:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:00.053 09:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:11:00.053 09:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:00.053 09:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:00.053 ************************************ 00:11:00.053 START TEST bdev_verify_big_io 00:11:00.053 ************************************ 00:11:00.053 09:16:46 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:00.053 [2024-07-12 09:16:46.219998] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:00.053 [2024-07-12 09:16:46.220239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68842 ] 00:11:00.053 [2024-07-12 09:16:46.384517] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:00.362 [2024-07-12 09:16:46.580734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.362 [2024-07-12 09:16:46.580756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.312 Running I/O for 5 seconds... 
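One sanity check worth knowing when reading these tables: the MiB/s column is just IOPS multiplied by the IO size. For Nvme0n1p1 in the bdev_verify table above, 1320.14 IOPS × 4096 B ≈ 5,407,293 B/s ≈ 5.16 MiB/s, matching the reported value; the big-IO verify that has just started uses 65536-byte IOs, so its MiB/s figures are the IOPS times 64 KiB.
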
00:11:07.869 00:11:07.869 Latency(us) 00:11:07.869 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:07.869 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:07.869 Verification LBA range: start 0x0 length 0x5e80 00:11:07.869 Nvme0n1p1 : 5.90 101.55 6.35 0.00 0.00 1179903.52 19660.80 1296421.24 00:11:07.869 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:07.869 Verification LBA range: start 0x5e80 length 0x5e80 00:11:07.869 Nvme0n1p1 : 5.78 99.67 6.23 0.00 0.00 1239091.92 20256.58 1204909.15 00:11:07.869 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:07.869 Verification LBA range: start 0x0 length 0x5e7f 00:11:07.869 Nvme0n1p2 : 5.90 105.73 6.61 0.00 0.00 1124778.02 75783.45 1105771.05 00:11:07.870 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:07.870 Verification LBA range: start 0x5e7f length 0x5e7f 00:11:07.870 Nvme0n1p2 : 5.86 103.79 6.49 0.00 0.00 1164938.85 73876.95 1037136.99 00:11:07.870 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:07.870 Verification LBA range: start 0x0 length 0xa000 00:11:07.870 Nvme1n1 : 5.99 103.74 6.48 0.00 0.00 1114515.33 81026.33 1776859.69 00:11:07.870 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:07.870 Verification LBA range: start 0xa000 length 0xa000 00:11:07.870 Nvme1n1 : 5.93 102.67 6.42 0.00 0.00 1128282.41 75783.45 1014258.97 00:11:07.870 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:07.870 Verification LBA range: start 0x0 length 0x8000 00:11:07.870 Nvme2n1 : 5.99 103.53 6.47 0.00 0.00 1076990.56 81979.58 1792111.71 00:11:07.870 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:07.870 Verification LBA range: start 0x8000 length 0x8000 00:11:07.870 Nvme2n1 : 5.86 103.20 6.45 0.00 0.00 1098820.20 76260.07 1395559.33 00:11:07.870 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:07.870 Verification LBA range: start 0x0 length 0x8000 00:11:07.870 Nvme2n2 : 6.05 104.79 6.55 0.00 0.00 1031534.01 58863.24 1502323.43 00:11:07.870 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:07.870 Verification LBA range: start 0x8000 length 0x8000 00:11:07.870 Nvme2n2 : 5.93 107.85 6.74 0.00 0.00 1023553.91 68157.44 1052389.00 00:11:07.870 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:07.870 Verification LBA range: start 0x0 length 0x8000 00:11:07.870 Nvme2n3 : 6.16 114.55 7.16 0.00 0.00 911189.30 17277.67 1548079.48 00:11:07.870 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:07.870 Verification LBA range: start 0x8000 length 0x8000 00:11:07.870 Nvme2n3 : 6.00 117.26 7.33 0.00 0.00 916911.18 21090.68 1037136.99 00:11:07.870 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:07.870 Verification LBA range: start 0x0 length 0x2000 00:11:07.870 Nvme3n1 : 6.20 147.04 9.19 0.00 0.00 698602.37 908.57 1296421.24 00:11:07.870 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:07.870 Verification LBA range: start 0x2000 length 0x2000 00:11:07.870 Nvme3n1 : 6.12 135.92 8.50 0.00 0.00 770675.39 781.96 1090519.04 00:11:07.870 =================================================================================================================== 00:11:07.870 Total : 1551.28 96.96 0.00 0.00 1013840.83 
781.96 1792111.71 00:11:09.244 00:11:09.244 real 0m9.314s 00:11:09.244 user 0m17.241s 00:11:09.244 sys 0m0.271s 00:11:09.244 09:16:55 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:09.244 09:16:55 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.244 ************************************ 00:11:09.244 END TEST bdev_verify_big_io 00:11:09.244 ************************************ 00:11:09.244 09:16:55 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:11:09.244 09:16:55 blockdev_nvme_gpt -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:09.244 09:16:55 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:11:09.244 09:16:55 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.244 09:16:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:09.244 ************************************ 00:11:09.244 START TEST bdev_write_zeroes 00:11:09.244 ************************************ 00:11:09.244 09:16:55 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:09.244 [2024-07-12 09:16:55.569088] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:09.244 [2024-07-12 09:16:55.569272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68963 ] 00:11:09.503 [2024-07-12 09:16:55.735816] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.761 [2024-07-12 09:16:55.964000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.327 Running I/O for 1 seconds... 
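The START TEST / END TEST banners and the real/user/sys lines that frame every test in this log come from the harness's run_test wrapper. Its visible contract is small enough to sketch; this is a deliberately simplified stand-in (SPDK's real helper in autotest_common.sh also manages xtrace and timing bookkeeping not shown here):

  run_test() {
    local name=$1
    shift
    echo "************ START TEST $name ************"
    time "$@"
    local rc=$?
    echo "************ END TEST $name ************"
    return $rc
  }

  # e.g. the invocation traced above:
  # run_test bdev_write_zeroes .../build/examples/bdevperf --json .../test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1
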
00:11:11.701 00:11:11.701 Latency(us) 00:11:11.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:11.701 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:11.701 Nvme0n1p1 : 1.02 6638.70 25.93 0.00 0.00 19210.98 13643.40 38606.66 00:11:11.701 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:11.701 Nvme0n1p2 : 1.02 6627.28 25.89 0.00 0.00 19197.19 13822.14 36461.85 00:11:11.701 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:11.701 Nvme1n1 : 1.03 6617.44 25.85 0.00 0.00 19170.95 14239.19 36461.85 00:11:11.701 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:11.701 Nvme2n1 : 1.03 6607.37 25.81 0.00 0.00 19082.39 12034.79 38130.04 00:11:11.701 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:11.701 Nvme2n2 : 1.03 6597.37 25.77 0.00 0.00 19066.89 11498.59 39083.29 00:11:11.701 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:11.701 Nvme2n3 : 1.03 6587.52 25.73 0.00 0.00 19051.74 10783.65 39083.29 00:11:11.701 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:11.701 Nvme3n1 : 1.03 6635.63 25.92 0.00 0.00 18966.16 8996.31 39559.91 00:11:11.701 =================================================================================================================== 00:11:11.701 Total : 46311.32 180.90 0.00 0.00 19106.43 8996.31 39559.91 00:11:12.656 00:11:12.656 real 0m3.463s 00:11:12.656 user 0m3.111s 00:11:12.656 sys 0m0.224s 00:11:12.656 09:16:58 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:12.656 09:16:58 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:11:12.656 ************************************ 00:11:12.656 END TEST bdev_write_zeroes 00:11:12.656 ************************************ 00:11:12.656 09:16:58 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:11:12.656 09:16:58 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:12.656 09:16:58 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:11:12.656 09:16:58 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:12.656 09:16:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:12.656 ************************************ 00:11:12.656 START TEST bdev_json_nonenclosed 00:11:12.656 ************************************ 00:11:12.656 09:16:58 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:12.912 [2024-07-12 09:16:59.107427] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:11:12.912 [2024-07-12 09:16:59.107604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69020 ] 00:11:13.168 [2024-07-12 09:16:59.282366] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.168 [2024-07-12 09:16:59.506954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.168 [2024-07-12 09:16:59.507075] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:11:13.168 [2024-07-12 09:16:59.507114] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:13.168 [2024-07-12 09:16:59.507143] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:13.733 00:11:13.733 real 0m0.929s 00:11:13.733 user 0m0.680s 00:11:13.733 sys 0m0.142s 00:11:13.733 09:16:59 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:11:13.733 09:16:59 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:13.733 09:16:59 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:11:13.733 ************************************ 00:11:13.733 END TEST bdev_json_nonenclosed 00:11:13.733 ************************************ 00:11:13.733 09:16:59 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:11:13.733 09:16:59 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # true 00:11:13.733 09:16:59 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:13.733 09:16:59 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:11:13.733 09:16:59 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:13.733 09:16:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:13.733 ************************************ 00:11:13.733 START TEST bdev_json_nonarray 00:11:13.733 ************************************ 00:11:13.733 09:16:59 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:13.733 [2024-07-12 09:17:00.075073] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:13.733 [2024-07-12 09:17:00.075295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69047 ] 00:11:14.003 [2024-07-12 09:17:00.248558] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.288 [2024-07-12 09:17:00.474252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.288 [2024-07-12 09:17:00.474385] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:11:14.288 [2024-07-12 09:17:00.474416] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:14.288 [2024-07-12 09:17:00.474435] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:14.851 00:11:14.851 real 0m0.927s 00:11:14.851 user 0m0.695s 00:11:14.851 sys 0m0.124s 00:11:14.851 09:17:00 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:11:14.851 09:17:00 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:14.851 09:17:00 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:11:14.851 ************************************ 00:11:14.851 END TEST bdev_json_nonarray 00:11:14.851 ************************************ 00:11:14.851 09:17:00 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:11:14.851 09:17:00 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # true 00:11:14.851 09:17:00 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:11:14.851 09:17:00 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:11:14.851 09:17:00 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:11:14.851 09:17:00 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:14.851 09:17:00 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:14.851 09:17:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:14.851 ************************************ 00:11:14.851 START TEST bdev_gpt_uuid 00:11:14.851 ************************************ 00:11:14.851 09:17:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1123 -- # bdev_gpt_uuid 00:11:14.851 09:17:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@614 -- # local bdev 00:11:14.851 09:17:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:11:14.851 09:17:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69078 00:11:14.851 09:17:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:14.851 09:17:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:14.851 09:17:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 69078 00:11:14.851 09:17:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@829 -- # '[' -z 69078 ']' 00:11:14.851 09:17:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.851 09:17:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:14.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.851 09:17:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.851 09:17:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:14.851 09:17:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:14.851 [2024-07-12 09:17:01.050654] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:11:14.851 [2024-07-12 09:17:01.050836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69078 ] 00:11:15.108 [2024-07-12 09:17:01.211417] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.108 [2024-07-12 09:17:01.409495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.039 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:16.039 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # return 0 00:11:16.039 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:16.039 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.039 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:16.296 Some configs were skipped because the RPC state that can call them passed over. 00:11:16.296 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.296 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:11:16.296 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.296 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:16.296 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.296 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:11:16.296 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.296 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:16.296 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.296 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # bdev='[ 00:11:16.296 { 00:11:16.296 "name": "Nvme0n1p1", 00:11:16.296 "aliases": [ 00:11:16.296 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:11:16.296 ], 00:11:16.296 "product_name": "GPT Disk", 00:11:16.296 "block_size": 4096, 00:11:16.296 "num_blocks": 774144, 00:11:16.296 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:16.296 "md_size": 64, 00:11:16.296 "md_interleave": false, 00:11:16.296 "dif_type": 0, 00:11:16.296 "assigned_rate_limits": { 00:11:16.296 "rw_ios_per_sec": 0, 00:11:16.296 "rw_mbytes_per_sec": 0, 00:11:16.296 "r_mbytes_per_sec": 0, 00:11:16.296 "w_mbytes_per_sec": 0 00:11:16.296 }, 00:11:16.296 "claimed": false, 00:11:16.296 "zoned": false, 00:11:16.296 "supported_io_types": { 00:11:16.296 "read": true, 00:11:16.296 "write": true, 00:11:16.296 "unmap": true, 00:11:16.296 "flush": true, 00:11:16.296 "reset": true, 00:11:16.296 "nvme_admin": false, 00:11:16.296 "nvme_io": false, 00:11:16.296 "nvme_io_md": false, 00:11:16.296 "write_zeroes": true, 00:11:16.296 "zcopy": false, 00:11:16.296 "get_zone_info": false, 00:11:16.296 "zone_management": false, 00:11:16.296 "zone_append": false, 00:11:16.296 "compare": true, 00:11:16.296 "compare_and_write": false, 00:11:16.296 "abort": true, 00:11:16.296 "seek_hole": false, 00:11:16.296 "seek_data": false, 00:11:16.296 "copy": 
true, 00:11:16.296 "nvme_iov_md": false 00:11:16.296 }, 00:11:16.296 "driver_specific": { 00:11:16.296 "gpt": { 00:11:16.296 "base_bdev": "Nvme0n1", 00:11:16.296 "offset_blocks": 256, 00:11:16.296 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:11:16.296 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:16.296 "partition_name": "SPDK_TEST_first" 00:11:16.296 } 00:11:16.296 } 00:11:16.296 } 00:11:16.296 ]' 00:11:16.296 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r length 00:11:16.296 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:11:16.296 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:11:16.296 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:16.296 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:16.554 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:16.554 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:11:16.554 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.554 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:16.554 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.554 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # bdev='[ 00:11:16.554 { 00:11:16.554 "name": "Nvme0n1p2", 00:11:16.554 "aliases": [ 00:11:16.554 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:11:16.554 ], 00:11:16.554 "product_name": "GPT Disk", 00:11:16.554 "block_size": 4096, 00:11:16.554 "num_blocks": 774143, 00:11:16.554 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:16.554 "md_size": 64, 00:11:16.554 "md_interleave": false, 00:11:16.554 "dif_type": 0, 00:11:16.554 "assigned_rate_limits": { 00:11:16.554 "rw_ios_per_sec": 0, 00:11:16.554 "rw_mbytes_per_sec": 0, 00:11:16.554 "r_mbytes_per_sec": 0, 00:11:16.554 "w_mbytes_per_sec": 0 00:11:16.554 }, 00:11:16.554 "claimed": false, 00:11:16.554 "zoned": false, 00:11:16.554 "supported_io_types": { 00:11:16.554 "read": true, 00:11:16.554 "write": true, 00:11:16.554 "unmap": true, 00:11:16.554 "flush": true, 00:11:16.554 "reset": true, 00:11:16.554 "nvme_admin": false, 00:11:16.554 "nvme_io": false, 00:11:16.554 "nvme_io_md": false, 00:11:16.554 "write_zeroes": true, 00:11:16.554 "zcopy": false, 00:11:16.554 "get_zone_info": false, 00:11:16.554 "zone_management": false, 00:11:16.554 "zone_append": false, 00:11:16.554 "compare": true, 00:11:16.554 "compare_and_write": false, 00:11:16.554 "abort": true, 00:11:16.554 "seek_hole": false, 00:11:16.554 "seek_data": false, 00:11:16.554 "copy": true, 00:11:16.554 "nvme_iov_md": false 00:11:16.554 }, 00:11:16.554 "driver_specific": { 00:11:16.554 "gpt": { 00:11:16.554 "base_bdev": "Nvme0n1", 00:11:16.554 "offset_blocks": 774400, 00:11:16.554 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:11:16.554 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:16.554 "partition_name": "SPDK_TEST_second" 00:11:16.554 } 00:11:16.554 
} 00:11:16.554 } 00:11:16.554 ]' 00:11:16.554 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r length 00:11:16.554 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:11:16.554 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:11:16.554 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:16.554 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:16.554 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:16.554 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@631 -- # killprocess 69078 00:11:16.554 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@948 -- # '[' -z 69078 ']' 00:11:16.554 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # kill -0 69078 00:11:16.555 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # uname 00:11:16.555 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:16.555 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69078 00:11:16.555 killing process with pid 69078 00:11:16.555 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:16.555 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:16.555 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69078' 00:11:16.555 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@967 -- # kill 69078 00:11:16.555 09:17:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # wait 69078 00:11:19.085 00:11:19.085 real 0m4.025s 00:11:19.085 user 0m4.433s 00:11:19.085 sys 0m0.434s 00:11:19.085 09:17:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:19.085 ************************************ 00:11:19.085 09:17:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:19.085 END TEST bdev_gpt_uuid 00:11:19.085 ************************************ 00:11:19.085 09:17:05 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:11:19.085 09:17:05 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:11:19.085 09:17:05 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:11:19.085 09:17:05 blockdev_nvme_gpt -- bdev/blockdev.sh@811 -- # cleanup 00:11:19.085 09:17:05 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:19.085 09:17:05 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:19.085 09:17:05 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:11:19.085 09:17:05 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:11:19.085 09:17:05 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:11:19.085 09:17:05 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 
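The gpt_uuid test that just finished boils down to a few jq assertions per partition: look the bdev up by its unique partition GUID and require that both the alias and driver_specific.gpt.unique_partition_guid round-trip to the same value. Reduced to standalone commands it looks roughly like this (the harness goes through its rpc_cmd helper; calling scripts/rpc.py directly against the default /var/tmp/spdk.sock socket is assumed to be equivalent here):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  uuid=6f89f330-603b-4116-ac73-2ca8eae53030   # SPDK_TEST_first in this run
  bdev_json=$("$rpc" bdev_get_bdevs -b "$uuid")
  [[ $(jq -r length <<< "$bdev_json") == 1 ]]                                            # exactly one match
  [[ $(jq -r '.[0].aliases[0]' <<< "$bdev_json") == "$uuid" ]]                           # alias is the GUID
  [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev_json") == "$uuid" ]]

The second partition (abf1734f-66e5-4c0f-aa29-4021d4d307df, SPDK_TEST_second) is checked the same way before the target is killed.
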
00:11:19.085 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:19.343 Waiting for block devices as requested 00:11:19.343 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:19.343 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:19.602 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:19.602 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:24.938 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:24.938 09:17:10 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme1n1 ]] 00:11:24.938 09:17:10 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme1n1 00:11:24.938 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:24.938 /dev/nvme1n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:11:24.938 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:24.938 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:11:24.938 09:17:11 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:11:24.938 00:11:24.938 real 1m5.456s 00:11:24.938 user 1m23.893s 00:11:24.938 sys 0m9.478s 00:11:24.938 09:17:11 blockdev_nvme_gpt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:24.938 ************************************ 00:11:24.938 09:17:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:24.938 END TEST blockdev_nvme_gpt 00:11:24.938 ************************************ 00:11:24.938 09:17:11 -- common/autotest_common.sh@1142 -- # return 0 00:11:24.938 09:17:11 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:24.938 09:17:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:24.938 09:17:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:24.938 09:17:11 -- common/autotest_common.sh@10 -- # set +x 00:11:24.938 ************************************ 00:11:24.938 START TEST nvme 00:11:24.938 ************************************ 00:11:24.938 09:17:11 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:25.196 * Looking for test storage... 00:11:25.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:25.196 09:17:11 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:25.455 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:26.022 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:26.022 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:26.281 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:26.281 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:26.281 09:17:12 nvme -- nvme/nvme.sh@79 -- # uname 00:11:26.281 09:17:12 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:11:26.281 09:17:12 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:11:26.281 09:17:12 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:11:26.281 09:17:12 nvme -- common/autotest_common.sh@1080 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:11:26.281 09:17:12 nvme -- common/autotest_common.sh@1066 -- # _randomize_va_space=2 00:11:26.281 09:17:12 nvme -- common/autotest_common.sh@1067 -- # echo 0 00:11:26.281 Waiting for stub to ready for secondary processes... 
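A note on the wipefs output above: the eight bytes 45 46 49 20 50 41 52 54 erased at offsets 0x1000 and 0x17a179000 are the ASCII string "EFI PART", i.e. the primary and backup GPT header signatures (consistent with the 4096-byte logical blocks this namespace reports, which put LBA 1 at offset 0x1000), and the two bytes 55 aa at offset 0x1fe are the protective-MBR boot signature. A quick way to confirm the decoding:

  printf '\x45\x46\x49\x20\x50\x41\x52\x54\n'   # prints: EFI PART

With those signatures gone and the partition table re-read, the GPT partitions created for the blockdev_nvme_gpt suite no longer exist when the nvme test suite starts below.
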
00:11:26.281 09:17:12 nvme -- common/autotest_common.sh@1069 -- # stubpid=69716 00:11:26.281 09:17:12 nvme -- common/autotest_common.sh@1070 -- # echo Waiting for stub to ready for secondary processes... 00:11:26.281 09:17:12 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:26.281 09:17:12 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/69716 ]] 00:11:26.281 09:17:12 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:11:26.281 09:17:12 nvme -- common/autotest_common.sh@1068 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:11:26.281 [2024-07-12 09:17:12.564433] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:26.281 [2024-07-12 09:17:12.564639] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:11:27.227 [2024-07-12 09:17:13.369046] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:27.227 09:17:13 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:27.227 09:17:13 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/69716 ]] 00:11:27.227 09:17:13 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:11:27.485 [2024-07-12 09:17:13.584802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.486 [2024-07-12 09:17:13.584910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.486 [2024-07-12 09:17:13.584911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.486 [2024-07-12 09:17:13.603951] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:11:27.486 [2024-07-12 09:17:13.604010] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:27.486 [2024-07-12 09:17:13.615817] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:11:27.486 [2024-07-12 09:17:13.616415] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:11:27.486 [2024-07-12 09:17:13.620721] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:27.486 [2024-07-12 09:17:13.621125] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:11:27.486 [2024-07-12 09:17:13.621259] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:11:27.486 [2024-07-12 09:17:13.624249] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:27.486 [2024-07-12 09:17:13.624717] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:11:27.486 [2024-07-12 09:17:13.625039] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:11:27.486 [2024-07-12 09:17:13.628025] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:27.486 [2024-07-12 09:17:13.628429] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:11:27.486 [2024-07-12 09:17:13.628573] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:11:27.486 [2024-07-12 09:17:13.628672] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 
created 00:11:27.486 [2024-07-12 09:17:13.628763] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:11:28.421 done. 00:11:28.421 09:17:14 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:28.421 09:17:14 nvme -- common/autotest_common.sh@1076 -- # echo done. 00:11:28.421 09:17:14 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:28.421 09:17:14 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:11:28.421 09:17:14 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:28.421 09:17:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:28.421 ************************************ 00:11:28.421 START TEST nvme_reset 00:11:28.421 ************************************ 00:11:28.421 09:17:14 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:28.681 Initializing NVMe Controllers 00:11:28.681 Skipping QEMU NVMe SSD at 0000:00:10.0 00:11:28.681 Skipping QEMU NVMe SSD at 0000:00:11.0 00:11:28.681 Skipping QEMU NVMe SSD at 0000:00:13.0 00:11:28.681 Skipping QEMU NVMe SSD at 0000:00:12.0 00:11:28.681 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:11:28.681 00:11:28.681 real 0m0.270s 00:11:28.681 user 0m0.116s 00:11:28.681 sys 0m0.111s 00:11:28.681 09:17:14 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:28.681 ************************************ 00:11:28.681 END TEST nvme_reset 00:11:28.681 ************************************ 00:11:28.681 09:17:14 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:11:28.681 09:17:14 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:28.681 09:17:14 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:11:28.681 09:17:14 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:28.681 09:17:14 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:28.681 09:17:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:28.681 ************************************ 00:11:28.681 START TEST nvme_identify 00:11:28.681 ************************************ 00:11:28.681 09:17:14 nvme.nvme_identify -- common/autotest_common.sh@1123 -- # nvme_identify 00:11:28.681 09:17:14 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:11:28.681 09:17:14 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:11:28.681 09:17:14 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:11:28.681 09:17:14 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:11:28.681 09:17:14 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:11:28.681 09:17:14 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:11:28.681 09:17:14 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:28.681 09:17:14 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:11:28.681 09:17:14 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:28.681 09:17:14 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:11:28.681 09:17:14 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:28.681 09:17:14 
nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:11:28.942 ===================================================== 00:11:28.942 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:28.942 ===================================================== 00:11:28.942 Controller Capabilities/Features 00:11:28.942 ================================ 00:11:28.942 Vendor ID: 1b36 00:11:28.942 Subsystem Vendor ID: 1af4 00:11:28.942 Serial Number: 12340 00:11:28.942 Model Number: QEMU NVMe Ctrl 00:11:28.942 Firmware Version: 8.0.0 00:11:28.942 Recommended Arb Burst: 6 00:11:28.942 IEEE OUI Identifier: 00 54 52 00:11:28.942 Multi-path I/O 00:11:28.942 May have multiple subsystem ports: No 00:11:28.942 May have multiple controllers: No 00:11:28.942 Associated with SR-IOV VF: No 00:11:28.942 Max Data Transfer Size: 524288 00:11:28.942 Max Number of Namespaces: 256 00:11:28.942 Max Number of I/O Queues: 64 00:11:28.942 NVMe Specification Version (VS): 1.4 00:11:28.942 NVMe Specification Version (Identify): 1.4 00:11:28.942 Maximum Queue Entries: 2048 00:11:28.942 Contiguous Queues Required: Yes 00:11:28.942 Arbitration Mechanisms Supported 00:11:28.942 Weighted Round Robin: Not Supported 00:11:28.942 Vendor Specific: Not Supported 00:11:28.942 Reset Timeout: 7500 ms 00:11:28.942 Doorbell Stride: 4 bytes 00:11:28.942 NVM Subsystem Reset: Not Supported 00:11:28.942 Command Sets Supported 00:11:28.942 NVM Command Set: Supported 00:11:28.942 Boot Partition: Not Supported 00:11:28.942 Memory Page Size Minimum: 4096 bytes 00:11:28.942 Memory Page Size Maximum: 65536 bytes 00:11:28.942 Persistent Memory Region: Not Supported 00:11:28.942 Optional Asynchronous Events Supported 00:11:28.942 Namespace Attribute Notices: Supported 00:11:28.943 Firmware Activation Notices: Not Supported 00:11:28.943 ANA Change Notices: Not Supported 00:11:28.943 PLE Aggregate Log Change Notices: Not Supported 00:11:28.943 LBA Status Info Alert Notices: Not Supported 00:11:28.943 EGE Aggregate Log Change Notices: Not Supported 00:11:28.943 Normal NVM Subsystem Shutdown event: Not Supported 00:11:28.943 Zone Descriptor Change Notices: Not Supported 00:11:28.943 Discovery Log Change Notices: Not Supported 00:11:28.943 Controller Attributes 00:11:28.943 128-bit Host Identifier: Not Supported 00:11:28.943 Non-Operational Permissive Mode: Not Supported 00:11:28.943 NVM Sets: Not Supported 00:11:28.943 Read Recovery Levels: Not Supported 00:11:28.943 Endurance Groups: Not Supported 00:11:28.943 Predictable Latency Mode: Not Supported 00:11:28.943 Traffic Based Keep ALive: Not Supported 00:11:28.943 Namespace Granularity: Not Supported 00:11:28.943 SQ Associations: Not Supported 00:11:28.943 UUID List: Not Supported 00:11:28.943 Multi-Domain Subsystem: Not Supported 00:11:28.943 Fixed Capacity Management: Not Supported 00:11:28.943 Variable Capacity Management: Not Supported 00:11:28.943 Delete Endurance Group: Not Supported 00:11:28.943 Delete NVM Set: Not Supported 00:11:28.943 Extended LBA Formats Supported: Supported 00:11:28.943 Flexible Data Placement Supported: Not Supported 00:11:28.943 00:11:28.943 Controller Memory Buffer Support 00:11:28.943 ================================ 00:11:28.943 Supported: No 00:11:28.943 00:11:28.943 Persistent Memory Region Support 00:11:28.943 ================================ 00:11:28.943 Supported: No 00:11:28.943 00:11:28.943 Admin Command Set Attributes 00:11:28.943 ============================ 00:11:28.943 Security Send/Receive: Not Supported 00:11:28.943 
Format NVM: Supported 00:11:28.943 Firmware Activate/Download: Not Supported 00:11:28.943 Namespace Management: Supported 00:11:28.943 Device Self-Test: Not Supported 00:11:28.943 Directives: Supported 00:11:28.943 NVMe-MI: Not Supported 00:11:28.943 Virtualization Management: Not Supported 00:11:28.943 Doorbell Buffer Config: Supported 00:11:28.943 Get LBA Status Capability: Not Supported 00:11:28.943 Command & Feature Lockdown Capability: Not Supported 00:11:28.943 Abort Command Limit: 4 00:11:28.943 Async Event Request Limit: 4 00:11:28.943 Number of Firmware Slots: N/A 00:11:28.943 Firmware Slot 1 Read-Only: N/A 00:11:28.943 Firmware Activation Without Reset: N/A 00:11:28.943 Multiple Update Detection Support: N/A 00:11:28.943 Firmware Update Gr[2024-07-12 09:17:15.164238] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 69750 terminated unexpected 00:11:28.943 anularity: No Information Provided 00:11:28.943 Per-Namespace SMART Log: Yes 00:11:28.943 Asymmetric Namespace Access Log Page: Not Supported 00:11:28.943 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:28.943 Command Effects Log Page: Supported 00:11:28.943 Get Log Page Extended Data: Supported 00:11:28.943 Telemetry Log Pages: Not Supported 00:11:28.943 Persistent Event Log Pages: Not Supported 00:11:28.943 Supported Log Pages Log Page: May Support 00:11:28.943 Commands Supported & Effects Log Page: Not Supported 00:11:28.943 Feature Identifiers & Effects Log Page:May Support 00:11:28.943 NVMe-MI Commands & Effects Log Page: May Support 00:11:28.943 Data Area 4 for Telemetry Log: Not Supported 00:11:28.943 Error Log Page Entries Supported: 1 00:11:28.943 Keep Alive: Not Supported 00:11:28.943 00:11:28.943 NVM Command Set Attributes 00:11:28.943 ========================== 00:11:28.943 Submission Queue Entry Size 00:11:28.943 Max: 64 00:11:28.943 Min: 64 00:11:28.943 Completion Queue Entry Size 00:11:28.943 Max: 16 00:11:28.943 Min: 16 00:11:28.943 Number of Namespaces: 256 00:11:28.943 Compare Command: Supported 00:11:28.943 Write Uncorrectable Command: Not Supported 00:11:28.943 Dataset Management Command: Supported 00:11:28.943 Write Zeroes Command: Supported 00:11:28.943 Set Features Save Field: Supported 00:11:28.943 Reservations: Not Supported 00:11:28.943 Timestamp: Supported 00:11:28.943 Copy: Supported 00:11:28.943 Volatile Write Cache: Present 00:11:28.943 Atomic Write Unit (Normal): 1 00:11:28.943 Atomic Write Unit (PFail): 1 00:11:28.943 Atomic Compare & Write Unit: 1 00:11:28.943 Fused Compare & Write: Not Supported 00:11:28.943 Scatter-Gather List 00:11:28.943 SGL Command Set: Supported 00:11:28.943 SGL Keyed: Not Supported 00:11:28.943 SGL Bit Bucket Descriptor: Not Supported 00:11:28.943 SGL Metadata Pointer: Not Supported 00:11:28.943 Oversized SGL: Not Supported 00:11:28.943 SGL Metadata Address: Not Supported 00:11:28.943 SGL Offset: Not Supported 00:11:28.943 Transport SGL Data Block: Not Supported 00:11:28.943 Replay Protected Memory Block: Not Supported 00:11:28.943 00:11:28.943 Firmware Slot Information 00:11:28.943 ========================= 00:11:28.943 Active slot: 1 00:11:28.943 Slot 1 Firmware Revision: 1.0 00:11:28.943 00:11:28.943 00:11:28.943 Commands Supported and Effects 00:11:28.943 ============================== 00:11:28.943 Admin Commands 00:11:28.943 -------------- 00:11:28.943 Delete I/O Submission Queue (00h): Supported 00:11:28.943 Create I/O Submission Queue (01h): Supported 00:11:28.943 Get Log Page (02h): Supported 00:11:28.943 Delete I/O Completion 
Queue (04h): Supported 00:11:28.943 Create I/O Completion Queue (05h): Supported 00:11:28.943 Identify (06h): Supported 00:11:28.943 Abort (08h): Supported 00:11:28.943 Set Features (09h): Supported 00:11:28.943 Get Features (0Ah): Supported 00:11:28.943 Asynchronous Event Request (0Ch): Supported 00:11:28.943 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:28.943 Directive Send (19h): Supported 00:11:28.943 Directive Receive (1Ah): Supported 00:11:28.943 Virtualization Management (1Ch): Supported 00:11:28.943 Doorbell Buffer Config (7Ch): Supported 00:11:28.943 Format NVM (80h): Supported LBA-Change 00:11:28.943 I/O Commands 00:11:28.943 ------------ 00:11:28.943 Flush (00h): Supported LBA-Change 00:11:28.943 Write (01h): Supported LBA-Change 00:11:28.943 Read (02h): Supported 00:11:28.943 Compare (05h): Supported 00:11:28.943 Write Zeroes (08h): Supported LBA-Change 00:11:28.943 Dataset Management (09h): Supported LBA-Change 00:11:28.943 Unknown (0Ch): Supported 00:11:28.943 Unknown (12h): Supported 00:11:28.943 Copy (19h): Supported LBA-Change 00:11:28.943 Unknown (1Dh): Supported LBA-Change 00:11:28.943 00:11:28.943 Error Log 00:11:28.943 ========= 00:11:28.943 00:11:28.943 Arbitration 00:11:28.943 =========== 00:11:28.943 Arbitration Burst: no limit 00:11:28.943 00:11:28.943 Power Management 00:11:28.943 ================ 00:11:28.943 Number of Power States: 1 00:11:28.943 Current Power State: Power State #0 00:11:28.943 Power State #0: 00:11:28.943 Max Power: 25.00 W 00:11:28.943 Non-Operational State: Operational 00:11:28.943 Entry Latency: 16 microseconds 00:11:28.943 Exit Latency: 4 microseconds 00:11:28.943 Relative Read Throughput: 0 00:11:28.943 Relative Read Latency: 0 00:11:28.943 Relative Write Throughput: 0 00:11:28.943 Relative Write Latency: 0 00:11:28.943 Idle Power[2024-07-12 09:17:15.165408] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 69750 terminated unexpected 00:11:28.943 : Not Reported 00:11:28.943 Active Power: Not Reported 00:11:28.943 Non-Operational Permissive Mode: Not Supported 00:11:28.943 00:11:28.943 Health Information 00:11:28.943 ================== 00:11:28.943 Critical Warnings: 00:11:28.943 Available Spare Space: OK 00:11:28.943 Temperature: OK 00:11:28.943 Device Reliability: OK 00:11:28.943 Read Only: No 00:11:28.943 Volatile Memory Backup: OK 00:11:28.943 Current Temperature: 323 Kelvin (50 Celsius) 00:11:28.943 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:28.943 Available Spare: 0% 00:11:28.943 Available Spare Threshold: 0% 00:11:28.943 Life Percentage Used: 0% 00:11:28.943 Data Units Read: 995 00:11:28.943 Data Units Written: 834 00:11:28.943 Host Read Commands: 48848 00:11:28.943 Host Write Commands: 47441 00:11:28.943 Controller Busy Time: 0 minutes 00:11:28.943 Power Cycles: 0 00:11:28.943 Power On Hours: 0 hours 00:11:28.943 Unsafe Shutdowns: 0 00:11:28.943 Unrecoverable Media Errors: 0 00:11:28.943 Lifetime Error Log Entries: 0 00:11:28.943 Warning Temperature Time: 0 minutes 00:11:28.943 Critical Temperature Time: 0 minutes 00:11:28.943 00:11:28.943 Number of Queues 00:11:28.943 ================ 00:11:28.943 Number of I/O Submission Queues: 64 00:11:28.943 Number of I/O Completion Queues: 64 00:11:28.943 00:11:28.943 ZNS Specific Controller Data 00:11:28.943 ============================ 00:11:28.943 Zone Append Size Limit: 0 00:11:28.943 00:11:28.943 00:11:28.943 Active Namespaces 00:11:28.943 ================= 00:11:28.943 Namespace ID:1 00:11:28.943 Error Recovery Timeout: 
Unlimited 00:11:28.943 Command Set Identifier: NVM (00h) 00:11:28.943 Deallocate: Supported 00:11:28.943 Deallocated/Unwritten Error: Supported 00:11:28.943 Deallocated Read Value: All 0x00 00:11:28.943 Deallocate in Write Zeroes: Not Supported 00:11:28.943 Deallocated Guard Field: 0xFFFF 00:11:28.943 Flush: Supported 00:11:28.944 Reservation: Not Supported 00:11:28.944 Metadata Transferred as: Separate Metadata Buffer 00:11:28.944 Namespace Sharing Capabilities: Private 00:11:28.944 Size (in LBAs): 1548666 (5GiB) 00:11:28.944 Capacity (in LBAs): 1548666 (5GiB) 00:11:28.944 Utilization (in LBAs): 1548666 (5GiB) 00:11:28.944 Thin Provisioning: Not Supported 00:11:28.944 Per-NS Atomic Units: No 00:11:28.944 Maximum Single Source Range Length: 128 00:11:28.944 Maximum Copy Length: 128 00:11:28.944 Maximum Source Range Count: 128 00:11:28.944 NGUID/EUI64 Never Reused: No 00:11:28.944 Namespace Write Protected: No 00:11:28.944 Number of LBA Formats: 8 00:11:28.944 Current LBA Format: LBA Format #07 00:11:28.944 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:28.944 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:28.944 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:28.944 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:28.944 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:28.944 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:28.944 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:28.944 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:28.944 00:11:28.944 NVM Specific Namespace Data 00:11:28.944 =========================== 00:11:28.944 Logical Block Storage Tag Mask: 0 00:11:28.944 Protection Information Capabilities: 00:11:28.944 16b Guard Protection Information Storage Tag Support: No 00:11:28.944 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:28.944 Storage Tag Check Read Support: No 00:11:28.944 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.944 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.944 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.944 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.944 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.944 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.944 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.944 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.944 ===================================================== 00:11:28.944 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:28.944 ===================================================== 00:11:28.944 Controller Capabilities/Features 00:11:28.944 ================================ 00:11:28.944 Vendor ID: 1b36 00:11:28.944 Subsystem Vendor ID: 1af4 00:11:28.944 Serial Number: 12341 00:11:28.944 Model Number: QEMU NVMe Ctrl 00:11:28.944 Firmware Version: 8.0.0 00:11:28.944 Recommended Arb Burst: 6 00:11:28.944 IEEE OUI Identifier: 00 54 52 00:11:28.944 Multi-path I/O 00:11:28.944 May have multiple subsystem ports: No 00:11:28.944 May have multiple controllers: No 00:11:28.944 Associated with SR-IOV VF: No 00:11:28.944 Max Data Transfer Size: 524288 00:11:28.944 Max Number of Namespaces: 256 00:11:28.944 Max 
Number of I/O Queues: 64 00:11:28.944 NVMe Specification Version (VS): 1.4 00:11:28.944 NVMe Specification Version (Identify): 1.4 00:11:28.944 Maximum Queue Entries: 2048 00:11:28.944 Contiguous Queues Required: Yes 00:11:28.944 Arbitration Mechanisms Supported 00:11:28.944 Weighted Round Robin: Not Supported 00:11:28.944 Vendor Specific: Not Supported 00:11:28.944 Reset Timeout: 7500 ms 00:11:28.944 Doorbell Stride: 4 bytes 00:11:28.944 NVM Subsystem Reset: Not Supported 00:11:28.944 Command Sets Supported 00:11:28.944 NVM Command Set: Supported 00:11:28.944 Boot Partition: Not Supported 00:11:28.944 Memory Page Size Minimum: 4096 bytes 00:11:28.944 Memory Page Size Maximum: 65536 bytes 00:11:28.944 Persistent Memory Region: Not Supported 00:11:28.944 Optional Asynchronous Events Supported 00:11:28.944 Namespace Attribute Notices: Supported 00:11:28.944 Firmware Activation Notices: Not Supported 00:11:28.944 ANA Change Notices: Not Supported 00:11:28.944 PLE Aggregate Log Change Notices: Not Supported 00:11:28.944 LBA Status Info Alert Notices: Not Supported 00:11:28.944 EGE Aggregate Log Change Notices: Not Supported 00:11:28.944 Normal NVM Subsystem Shutdown event: Not Supported 00:11:28.944 Zone Descriptor Change Notices: Not Supported 00:11:28.944 Discovery Log Change Notices: Not Supported 00:11:28.944 Controller Attributes 00:11:28.944 128-bit Host Identifier: Not Supported 00:11:28.944 Non-Operational Permissive Mode: Not Supported 00:11:28.944 NVM Sets: Not Supported 00:11:28.944 Read Recovery Levels: Not Supported 00:11:28.944 Endurance Groups: Not Supported 00:11:28.944 Predictable Latency Mode: Not Supported 00:11:28.944 Traffic Based Keep ALive: Not Supported 00:11:28.944 Namespace Granularity: Not Supported 00:11:28.944 SQ Associations: Not Supported 00:11:28.944 UUID List: Not Supported 00:11:28.944 Multi-Domain Subsystem: Not Supported 00:11:28.944 Fixed Capacity Management: Not Supported 00:11:28.944 Variable Capacity Management: Not Supported 00:11:28.944 Delete Endurance Group: Not Supported 00:11:28.944 Delete NVM Set: Not Supported 00:11:28.944 Extended LBA Formats Supported: Supported 00:11:28.944 Flexible Data Placement Supported: Not Supported 00:11:28.944 00:11:28.944 Controller Memory Buffer Support 00:11:28.944 ================================ 00:11:28.944 Supported: No 00:11:28.944 00:11:28.944 Persistent Memory Region Support 00:11:28.944 ================================ 00:11:28.944 Supported: No 00:11:28.944 00:11:28.944 Admin Command Set Attributes 00:11:28.944 ============================ 00:11:28.944 Security Send/Receive: Not Supported 00:11:28.944 Format NVM: Supported 00:11:28.944 Firmware Activate/Download: Not Supported 00:11:28.944 Namespace Management: Supported 00:11:28.944 Device Self-Test: Not Supported 00:11:28.944 Directives: Supported 00:11:28.944 NVMe-MI: Not Supported 00:11:28.944 Virtualization Management: Not Supported 00:11:28.944 Doorbell Buffer Config: Supported 00:11:28.944 Get LBA Status Capability: Not Supported 00:11:28.944 Command & Feature Lockdown Capability: Not Supported 00:11:28.944 Abort Command Limit: 4 00:11:28.944 Async Event Request Limit: 4 00:11:28.944 Number of Firmware Slots: N/A 00:11:28.944 Firmware Slot 1 Read-Only: N/A 00:11:28.944 Firmware Activation Without Reset: N/A 00:11:28.944 Multiple Update Detection Support: N/A 00:11:28.944 Firmware Update Granularity: No Information Provided 00:11:28.944 Per-Namespace SMART Log: Yes 00:11:28.944 Asymmetric Namespace Access Log Page: Not Supported 00:11:28.944 Subsystem 
NQN: nqn.2019-08.org.qemu:12341 00:11:28.944 Command Effects Log Page: Supported 00:11:28.944 Get Log Page Extended Data: Supported 00:11:28.944 Telemetry Log Pages: Not Supported 00:11:28.944 Persistent Event Log Pages: Not Supported 00:11:28.944 Supported Log Pages Log Page: May Support 00:11:28.944 Commands Supported & Effects Log Page: Not Supported 00:11:28.944 Feature Identifiers & Effects Log Page:May Support 00:11:28.944 NVMe-MI Commands & Effects Log Page: May Support 00:11:28.944 Data Area 4 for Telemetry Log: Not Supported 00:11:28.944 Error Log Page Entries Supported: 1 00:11:28.944 Keep Alive: Not Supported 00:11:28.944 00:11:28.944 NVM Command Set Attributes 00:11:28.944 ========================== 00:11:28.944 Submission Queue Entry Size 00:11:28.944 Max: 64 00:11:28.944 Min: 64 00:11:28.944 Completion Queue Entry Size 00:11:28.944 Max: 16 00:11:28.944 Min: 16 00:11:28.944 Number of Namespaces: 256 00:11:28.944 Compare Command: Supported 00:11:28.944 Write Uncorrectable Command: Not Supported 00:11:28.944 Dataset Management Command: Supported 00:11:28.944 Write Zeroes Command: Supported 00:11:28.944 Set Features Save Field: Supported 00:11:28.944 Reservations: Not Supported 00:11:28.944 Timestamp: Supported 00:11:28.944 Copy: Supported 00:11:28.944 Volatile Write Cache: Present 00:11:28.944 Atomic Write Unit (Normal): 1 00:11:28.944 Atomic Write Unit (PFail): 1 00:11:28.944 Atomic Compare & Write Unit: 1 00:11:28.944 Fused Compare & Write: Not Supported 00:11:28.944 Scatter-Gather List 00:11:28.944 SGL Command Set: Supported 00:11:28.944 SGL Keyed: Not Supported 00:11:28.944 SGL Bit Bucket Descriptor: Not Supported 00:11:28.944 SGL Metadata Pointer: Not Supported 00:11:28.944 Oversized SGL: Not Supported 00:11:28.944 SGL Metadata Address: Not Supported 00:11:28.944 SGL Offset: Not Supported 00:11:28.944 Transport SGL Data Block: Not Supported 00:11:28.944 Replay Protected Memory Block: Not Supported 00:11:28.944 00:11:28.944 Firmware Slot Information 00:11:28.944 ========================= 00:11:28.944 Active slot: 1 00:11:28.944 Slot 1 Firmware Revision: 1.0 00:11:28.944 00:11:28.944 00:11:28.944 Commands Supported and Effects 00:11:28.944 ============================== 00:11:28.944 Admin Commands 00:11:28.944 -------------- 00:11:28.944 Delete I/O Submission Queue (00h): Supported 00:11:28.944 Create I/O Submission Queue (01h): Supported 00:11:28.944 Get Log Page (02h): Supported 00:11:28.944 Delete I/O Completion Queue (04h): Supported 00:11:28.944 Create I/O Completion Queue (05h): Supported 00:11:28.944 Identify (06h): Supported 00:11:28.944 Abort (08h): Supported 00:11:28.944 Set Features (09h): Supported 00:11:28.944 Get Features (0Ah): Supported 00:11:28.944 Asynchronous Event Request (0Ch): Supported 00:11:28.944 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:28.944 Directive Send (19h): Supported 00:11:28.944 Directive Receive (1Ah): Supported 00:11:28.945 Virtualization Management (1Ch): Supported 00:11:28.945 Doorbell Buffer Config (7Ch): Supported 00:11:28.945 Format NVM (80h): Supported LBA-Change 00:11:28.945 I/O Commands 00:11:28.945 ------------ 00:11:28.945 Flush (00h): Supported LBA-Change 00:11:28.945 Write (01h): Supported LBA-Change 00:11:28.945 Read (02h): Supported 00:11:28.945 Compare (05h): Supported 00:11:28.945 Write Zeroes (08h): Supported LBA-Change 00:11:28.945 Dataset Management (09h): Supported LBA-Change 00:11:28.945 Unknown (0Ch): Supported 00:11:28.945 Unknown (12h): Supported 00:11:28.945 Copy (19h): Supported LBA-Change 
00:11:28.945 Unknown (1Dh): Supported LBA-Change 00:11:28.945 00:11:28.945 Error Log 00:11:28.945 ========= 00:11:28.945 00:11:28.945 Arbitration 00:11:28.945 =========== 00:11:28.945 Arbitration Burst: no limit 00:11:28.945 00:11:28.945 Power Management 00:11:28.945 ================ 00:11:28.945 Number of Power States: 1 00:11:28.945 Current Power State: Power State #0 00:11:28.945 Power State #0: 00:11:28.945 Max Power: 25.00 W 00:11:28.945 Non-Operational State: Operational 00:11:28.945 Entry Latency: 16 microseconds 00:11:28.945 Exit Latency: 4 microseconds 00:11:28.945 Relative Read Throughput: 0 00:11:28.945 Relative Read Latency: 0 00:11:28.945 Relative Write Throughput: 0 00:11:28.945 Relative Write Latency: 0 00:11:28.945 Idle Power: Not Reported 00:11:28.945 Active Power: Not Reported 00:11:28.945 Non-Operational Permissive Mode: Not Supported 00:11:28.945 00:11:28.945 Health Information 00:11:28.945 ================== 00:11:28.945 Critical Warnings: 00:11:28.945 Available Spare Space: OK 00:11:28.945 Temperature: [2024-07-12 09:17:15.166398] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 69750 terminated unexpected 00:11:28.945 OK 00:11:28.945 Device Reliability: OK 00:11:28.945 Read Only: No 00:11:28.945 Volatile Memory Backup: OK 00:11:28.945 Current Temperature: 323 Kelvin (50 Celsius) 00:11:28.945 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:28.945 Available Spare: 0% 00:11:28.945 Available Spare Threshold: 0% 00:11:28.945 Life Percentage Used: 0% 00:11:28.945 Data Units Read: 735 00:11:28.945 Data Units Written: 581 00:11:28.945 Host Read Commands: 34776 00:11:28.945 Host Write Commands: 32426 00:11:28.945 Controller Busy Time: 0 minutes 00:11:28.945 Power Cycles: 0 00:11:28.945 Power On Hours: 0 hours 00:11:28.945 Unsafe Shutdowns: 0 00:11:28.945 Unrecoverable Media Errors: 0 00:11:28.945 Lifetime Error Log Entries: 0 00:11:28.945 Warning Temperature Time: 0 minutes 00:11:28.945 Critical Temperature Time: 0 minutes 00:11:28.945 00:11:28.945 Number of Queues 00:11:28.945 ================ 00:11:28.945 Number of I/O Submission Queues: 64 00:11:28.945 Number of I/O Completion Queues: 64 00:11:28.945 00:11:28.945 ZNS Specific Controller Data 00:11:28.945 ============================ 00:11:28.945 Zone Append Size Limit: 0 00:11:28.945 00:11:28.945 00:11:28.945 Active Namespaces 00:11:28.945 ================= 00:11:28.945 Namespace ID:1 00:11:28.945 Error Recovery Timeout: Unlimited 00:11:28.945 Command Set Identifier: NVM (00h) 00:11:28.945 Deallocate: Supported 00:11:28.945 Deallocated/Unwritten Error: Supported 00:11:28.945 Deallocated Read Value: All 0x00 00:11:28.945 Deallocate in Write Zeroes: Not Supported 00:11:28.945 Deallocated Guard Field: 0xFFFF 00:11:28.945 Flush: Supported 00:11:28.945 Reservation: Not Supported 00:11:28.945 Namespace Sharing Capabilities: Private 00:11:28.945 Size (in LBAs): 1310720 (5GiB) 00:11:28.945 Capacity (in LBAs): 1310720 (5GiB) 00:11:28.945 Utilization (in LBAs): 1310720 (5GiB) 00:11:28.945 Thin Provisioning: Not Supported 00:11:28.945 Per-NS Atomic Units: No 00:11:28.945 Maximum Single Source Range Length: 128 00:11:28.945 Maximum Copy Length: 128 00:11:28.945 Maximum Source Range Count: 128 00:11:28.945 NGUID/EUI64 Never Reused: No 00:11:28.945 Namespace Write Protected: No 00:11:28.945 Number of LBA Formats: 8 00:11:28.945 Current LBA Format: LBA Format #04 00:11:28.945 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:28.945 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:28.945 LBA 
Format #02: Data Size: 512 Metadata Size: 16 00:11:28.945 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:28.945 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:28.945 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:28.945 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:28.945 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:28.945 00:11:28.945 NVM Specific Namespace Data 00:11:28.945 =========================== 00:11:28.945 Logical Block Storage Tag Mask: 0 00:11:28.945 Protection Information Capabilities: 00:11:28.945 16b Guard Protection Information Storage Tag Support: No 00:11:28.945 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:28.945 Storage Tag Check Read Support: No 00:11:28.945 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.945 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.945 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.945 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.945 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.945 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.945 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.945 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.945 ===================================================== 00:11:28.945 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:28.945 ===================================================== 00:11:28.945 Controller Capabilities/Features 00:11:28.945 ================================ 00:11:28.945 Vendor ID: 1b36 00:11:28.945 Subsystem Vendor ID: 1af4 00:11:28.945 Serial Number: 12343 00:11:28.945 Model Number: QEMU NVMe Ctrl 00:11:28.945 Firmware Version: 8.0.0 00:11:28.945 Recommended Arb Burst: 6 00:11:28.945 IEEE OUI Identifier: 00 54 52 00:11:28.945 Multi-path I/O 00:11:28.945 May have multiple subsystem ports: No 00:11:28.945 May have multiple controllers: Yes 00:11:28.945 Associated with SR-IOV VF: No 00:11:28.945 Max Data Transfer Size: 524288 00:11:28.945 Max Number of Namespaces: 256 00:11:28.945 Max Number of I/O Queues: 64 00:11:28.945 NVMe Specification Version (VS): 1.4 00:11:28.945 NVMe Specification Version (Identify): 1.4 00:11:28.945 Maximum Queue Entries: 2048 00:11:28.945 Contiguous Queues Required: Yes 00:11:28.945 Arbitration Mechanisms Supported 00:11:28.945 Weighted Round Robin: Not Supported 00:11:28.945 Vendor Specific: Not Supported 00:11:28.945 Reset Timeout: 7500 ms 00:11:28.945 Doorbell Stride: 4 bytes 00:11:28.945 NVM Subsystem Reset: Not Supported 00:11:28.945 Command Sets Supported 00:11:28.945 NVM Command Set: Supported 00:11:28.945 Boot Partition: Not Supported 00:11:28.945 Memory Page Size Minimum: 4096 bytes 00:11:28.945 Memory Page Size Maximum: 65536 bytes 00:11:28.945 Persistent Memory Region: Not Supported 00:11:28.945 Optional Asynchronous Events Supported 00:11:28.945 Namespace Attribute Notices: Supported 00:11:28.945 Firmware Activation Notices: Not Supported 00:11:28.945 ANA Change Notices: Not Supported 00:11:28.945 PLE Aggregate Log Change Notices: Not Supported 00:11:28.945 LBA Status Info Alert Notices: Not Supported 00:11:28.945 EGE Aggregate Log Change Notices: Not Supported 
00:11:28.945 Normal NVM Subsystem Shutdown event: Not Supported 00:11:28.945 Zone Descriptor Change Notices: Not Supported 00:11:28.945 Discovery Log Change Notices: Not Supported 00:11:28.945 Controller Attributes 00:11:28.945 128-bit Host Identifier: Not Supported 00:11:28.945 Non-Operational Permissive Mode: Not Supported 00:11:28.945 NVM Sets: Not Supported 00:11:28.945 Read Recovery Levels: Not Supported 00:11:28.945 Endurance Groups: Supported 00:11:28.945 Predictable Latency Mode: Not Supported 00:11:28.945 Traffic Based Keep ALive: Not Supported 00:11:28.945 Namespace Granularity: Not Supported 00:11:28.945 SQ Associations: Not Supported 00:11:28.945 UUID List: Not Supported 00:11:28.945 Multi-Domain Subsystem: Not Supported 00:11:28.945 Fixed Capacity Management: Not Supported 00:11:28.945 Variable Capacity Management: Not Supported 00:11:28.945 Delete Endurance Group: Not Supported 00:11:28.945 Delete NVM Set: Not Supported 00:11:28.945 Extended LBA Formats Supported: Supported 00:11:28.945 Flexible Data Placement Supported: Supported 00:11:28.945 00:11:28.945 Controller Memory Buffer Support 00:11:28.945 ================================ 00:11:28.945 Supported: No 00:11:28.945 00:11:28.945 Persistent Memory Region Support 00:11:28.945 ================================ 00:11:28.945 Supported: No 00:11:28.945 00:11:28.945 Admin Command Set Attributes 00:11:28.945 ============================ 00:11:28.945 Security Send/Receive: Not Supported 00:11:28.945 Format NVM: Supported 00:11:28.945 Firmware Activate/Download: Not Supported 00:11:28.945 Namespace Management: Supported 00:11:28.945 Device Self-Test: Not Supported 00:11:28.945 Directives: Supported 00:11:28.945 NVMe-MI: Not Supported 00:11:28.945 Virtualization Management: Not Supported 00:11:28.945 Doorbell Buffer Config: Supported 00:11:28.945 Get LBA Status Capability: Not Supported 00:11:28.946 Command & Feature Lockdown Capability: Not Supported 00:11:28.946 Abort Command Limit: 4 00:11:28.946 Async Event Request Limit: 4 00:11:28.946 Number of Firmware Slots: N/A 00:11:28.946 Firmware Slot 1 Read-Only: N/A 00:11:28.946 Firmware Activation Without Reset: N/A 00:11:28.946 Multiple Update Detection Support: N/A 00:11:28.946 Firmware Update Granularity: No Information Provided 00:11:28.946 Per-Namespace SMART Log: Yes 00:11:28.946 Asymmetric Namespace Access Log Page: Not Supported 00:11:28.946 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:28.946 Command Effects Log Page: Supported 00:11:28.946 Get Log Page Extended Data: Supported 00:11:28.946 Telemetry Log Pages: Not Supported 00:11:28.946 Persistent Event Log Pages: Not Supported 00:11:28.946 Supported Log Pages Log Page: May Support 00:11:28.946 Commands Supported & Effects Log Page: Not Supported 00:11:28.946 Feature Identifiers & Effects Log Page:May Support 00:11:28.946 NVMe-MI Commands & Effects Log Page: May Support 00:11:28.946 Data Area 4 for Telemetry Log: Not Supported 00:11:28.946 Error Log Page Entries Supported: 1 00:11:28.946 Keep Alive: Not Supported 00:11:28.946 00:11:28.946 NVM Command Set Attributes 00:11:28.946 ========================== 00:11:28.946 Submission Queue Entry Size 00:11:28.946 Max: 64 00:11:28.946 Min: 64 00:11:28.946 Completion Queue Entry Size 00:11:28.946 Max: 16 00:11:28.946 Min: 16 00:11:28.946 Number of Namespaces: 256 00:11:28.946 Compare Command: Supported 00:11:28.946 Write Uncorrectable Command: Not Supported 00:11:28.946 Dataset Management Command: Supported 00:11:28.946 Write Zeroes Command: Supported 00:11:28.946 Set 
Features Save Field: Supported 00:11:28.946 Reservations: Not Supported 00:11:28.946 Timestamp: Supported 00:11:28.946 Copy: Supported 00:11:28.946 Volatile Write Cache: Present 00:11:28.946 Atomic Write Unit (Normal): 1 00:11:28.946 Atomic Write Unit (PFail): 1 00:11:28.946 Atomic Compare & Write Unit: 1 00:11:28.946 Fused Compare & Write: Not Supported 00:11:28.946 Scatter-Gather List 00:11:28.946 SGL Command Set: Supported 00:11:28.946 SGL Keyed: Not Supported 00:11:28.946 SGL Bit Bucket Descriptor: Not Supported 00:11:28.946 SGL Metadata Pointer: Not Supported 00:11:28.946 Oversized SGL: Not Supported 00:11:28.946 SGL Metadata Address: Not Supported 00:11:28.946 SGL Offset: Not Supported 00:11:28.946 Transport SGL Data Block: Not Supported 00:11:28.946 Replay Protected Memory Block: Not Supported 00:11:28.946 00:11:28.946 Firmware Slot Information 00:11:28.946 ========================= 00:11:28.946 Active slot: 1 00:11:28.946 Slot 1 Firmware Revision: 1.0 00:11:28.946 00:11:28.946 00:11:28.946 Commands Supported and Effects 00:11:28.946 ============================== 00:11:28.946 Admin Commands 00:11:28.946 -------------- 00:11:28.946 Delete I/O Submission Queue (00h): Supported 00:11:28.946 Create I/O Submission Queue (01h): Supported 00:11:28.946 Get Log Page (02h): Supported 00:11:28.946 Delete I/O Completion Queue (04h): Supported 00:11:28.946 Create I/O Completion Queue (05h): Supported 00:11:28.946 Identify (06h): Supported 00:11:28.946 Abort (08h): Supported 00:11:28.946 Set Features (09h): Supported 00:11:28.946 Get Features (0Ah): Supported 00:11:28.946 Asynchronous Event Request (0Ch): Supported 00:11:28.946 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:28.946 Directive Send (19h): Supported 00:11:28.946 Directive Receive (1Ah): Supported 00:11:28.946 Virtualization Management (1Ch): Supported 00:11:28.946 Doorbell Buffer Config (7Ch): Supported 00:11:28.946 Format NVM (80h): Supported LBA-Change 00:11:28.946 I/O Commands 00:11:28.946 ------------ 00:11:28.946 Flush (00h): Supported LBA-Change 00:11:28.946 Write (01h): Supported LBA-Change 00:11:28.946 Read (02h): Supported 00:11:28.946 Compare (05h): Supported 00:11:28.946 Write Zeroes (08h): Supported LBA-Change 00:11:28.946 Dataset Management (09h): Supported LBA-Change 00:11:28.946 Unknown (0Ch): Supported 00:11:28.946 Unknown (12h): Supported 00:11:28.946 Copy (19h): Supported LBA-Change 00:11:28.946 Unknown (1Dh): Supported LBA-Change 00:11:28.946 00:11:28.946 Error Log 00:11:28.946 ========= 00:11:28.946 00:11:28.946 Arbitration 00:11:28.946 =========== 00:11:28.946 Arbitration Burst: no limit 00:11:28.946 00:11:28.946 Power Management 00:11:28.946 ================ 00:11:28.946 Number of Power States: 1 00:11:28.946 Current Power State: Power State #0 00:11:28.946 Power State #0: 00:11:28.946 Max Power: 25.00 W 00:11:28.946 Non-Operational State: Operational 00:11:28.946 Entry Latency: 16 microseconds 00:11:28.946 Exit Latency: 4 microseconds 00:11:28.946 Relative Read Throughput: 0 00:11:28.946 Relative Read Latency: 0 00:11:28.946 Relative Write Throughput: 0 00:11:28.946 Relative Write Latency: 0 00:11:28.946 Idle Power: Not Reported 00:11:28.946 Active Power: Not Reported 00:11:28.946 Non-Operational Permissive Mode: Not Supported 00:11:28.946 00:11:28.946 Health Information 00:11:28.946 ================== 00:11:28.946 Critical Warnings: 00:11:28.946 Available Spare Space: OK 00:11:28.946 Temperature: OK 00:11:28.946 Device Reliability: OK 00:11:28.946 Read Only: No 00:11:28.946 Volatile Memory 
Backup: OK 00:11:28.946 Current Temperature: 323 Kelvin (50 Celsius) 00:11:28.946 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:28.946 Available Spare: 0% 00:11:28.946 Available Spare Threshold: 0% 00:11:28.946 Life Percentage Used: 0% 00:11:28.946 Data Units Read: 803 00:11:28.946 Data Units Written: 696 00:11:28.946 Host Read Commands: 34683 00:11:28.946 Host Write Commands: 33273 00:11:28.946 Controller Busy Time: 0 minutes 00:11:28.946 Power Cycles: 0 00:11:28.946 Power On Hours: 0 hours 00:11:28.946 Unsafe Shutdowns: 0 00:11:28.946 Unrecoverable Media Errors: 0 00:11:28.946 Lifetime Error Log Entries: 0 00:11:28.946 Warning Temperature Time: 0 minutes 00:11:28.946 Critical Temperature Time: 0 minutes 00:11:28.946 00:11:28.946 Number of Queues 00:11:28.946 ================ 00:11:28.946 Number of I/O Submission Queues: 64 00:11:28.946 Number of I/O Completion Queues: 64 00:11:28.946 00:11:28.946 ZNS Specific Controller Data 00:11:28.946 ============================ 00:11:28.946 Zone Append Size Limit: 0 00:11:28.946 00:11:28.946 00:11:28.946 Active Namespaces 00:11:28.946 ================= 00:11:28.946 Namespace ID:1 00:11:28.946 Error Recovery Timeout: Unlimited 00:11:28.946 Command Set Identifier: NVM (00h) 00:11:28.946 Deallocate: Supported 00:11:28.946 Deallocated/Unwritten Error: Supported 00:11:28.946 Deallocated Read Value: All 0x00 00:11:28.946 Deallocate in Write Zeroes: Not Supported 00:11:28.946 Deallocated Guard Field: 0xFFFF 00:11:28.946 Flush: Supported 00:11:28.946 Reservation: Not Supported 00:11:28.946 Namespace Sharing Capabilities: Multiple Controllers 00:11:28.946 Size (in LBAs): 262144 (1GiB) 00:11:28.946 Capacity (in LBAs): 262144 (1GiB) 00:11:28.946 Utilization (in LBAs): 262144 (1GiB) 00:11:28.946 Thin Provisioning: Not Supported 00:11:28.946 Per-NS Atomic Units: No 00:11:28.946 Maximum Single Source Range Length: 128 00:11:28.946 Maximum Copy Length: 128 00:11:28.946 Maximum Source Range Count: 128 00:11:28.946 NGUID/EUI64 Never Reused: No 00:11:28.946 Namespace Write Protected: No 00:11:28.946 Endurance group ID: 1 00:11:28.946 Number of LBA Formats: 8 00:11:28.946 Current LBA Format: LBA Format #04 00:11:28.946 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:28.946 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:28.946 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:28.946 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:28.946 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:28.946 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:28.946 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:28.946 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:28.946 00:11:28.946 Get Feature FDP: 00:11:28.946 ================ 00:11:28.946 Enabled: Yes 00:11:28.946 FDP configuration index: 0 00:11:28.946 00:11:28.946 FDP configurations log page 00:11:28.946 =========================== 00:11:28.946 Number of FDP configurations: 1 00:11:28.946 Version: 0 00:11:28.946 Size: 112 00:11:28.946 FDP Configuration Descriptor: 0 00:11:28.946 Descriptor Size: 96 00:11:28.946 Reclaim Group Identifier format: 2 00:11:28.946 FDP Volatile Write Cache: Not Present 00:11:28.946 FDP Configuration: Valid 00:11:28.946 Vendor Specific Size: 0 00:11:28.946 Number of Reclaim Groups: 2 00:11:28.946 Number of Recalim Unit Handles: 8 00:11:28.946 Max Placement Identifiers: 128 00:11:28.946 Number of Namespaces Suppprted: 256 00:11:28.946 Reclaim unit Nominal Size: 6000000 bytes 00:11:28.946 Estimated Reclaim Unit Time Limit: Not Reported 
00:11:28.946 RUH Desc #000: RUH Type: Initially Isolated 00:11:28.946 RUH Desc #001: RUH Type: Initially Isolated 00:11:28.946 RUH Desc #002: RUH Type: Initially Isolated 00:11:28.946 RUH Desc #003: RUH Type: Initially Isolated 00:11:28.946 RUH Desc #004: RUH Type: Initially Isolated 00:11:28.947 RUH Desc #005: RUH Type: Initially Isolated 00:11:28.947 RUH Desc #006: RUH Type: Initially Isolated 00:11:28.947 RUH Desc #007: RUH Type: Initially Isolated 00:11:28.947 00:11:28.947 FDP reclaim unit handle usage log page 00:11:28.947 ====================================== 00:11:28.947 Number of Reclaim Unit Handles: 8 00:11:28.947 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:28.947 RUH Usage Desc #001: RUH Attributes: Unused 00:11:28.947 RUH Usage Desc #002: RUH Attributes: Unused 00:11:28.947 RUH Usage Desc #003: RUH Attributes: Unused 00:11:28.947 RUH Usage Desc #004: RUH Attributes: Unused 00:11:28.947 RUH Usage Desc #005: RUH Attributes: Unused 00:11:28.947 RUH Usage Desc #006: RUH Attributes: Unused 00:11:28.947 RUH Usage Desc #007: RUH Attributes: Unused 00:11:28.947 00:11:28.947 FDP statistics log page 00:11:28.947 ======================= 00:11:28.947 Host bytes with metadata written: 434282496 00:11:28.947 Media[2024-07-12 09:17:15.168168] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 69750 terminated unexpected 00:11:28.947 bytes with metadata written: 434348032 00:11:28.947 Media bytes erased: 0 00:11:28.947 00:11:28.947 FDP events log page 00:11:28.947 =================== 00:11:28.947 Number of FDP events: 0 00:11:28.947 00:11:28.947 NVM Specific Namespace Data 00:11:28.947 =========================== 00:11:28.947 Logical Block Storage Tag Mask: 0 00:11:28.947 Protection Information Capabilities: 00:11:28.947 16b Guard Protection Information Storage Tag Support: No 00:11:28.947 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:28.947 Storage Tag Check Read Support: No 00:11:28.947 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.947 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.947 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.947 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.947 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.947 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.947 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.947 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.947 ===================================================== 00:11:28.947 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:28.947 ===================================================== 00:11:28.947 Controller Capabilities/Features 00:11:28.947 ================================ 00:11:28.947 Vendor ID: 1b36 00:11:28.947 Subsystem Vendor ID: 1af4 00:11:28.947 Serial Number: 12342 00:11:28.947 Model Number: QEMU NVMe Ctrl 00:11:28.947 Firmware Version: 8.0.0 00:11:28.947 Recommended Arb Burst: 6 00:11:28.947 IEEE OUI Identifier: 00 54 52 00:11:28.947 Multi-path I/O 00:11:28.947 May have multiple subsystem ports: No 00:11:28.947 May have multiple controllers: No 00:11:28.947 Associated with SR-IOV VF: No 00:11:28.947 
Max Data Transfer Size: 524288 00:11:28.947 Max Number of Namespaces: 256 00:11:28.947 Max Number of I/O Queues: 64 00:11:28.947 NVMe Specification Version (VS): 1.4 00:11:28.947 NVMe Specification Version (Identify): 1.4 00:11:28.947 Maximum Queue Entries: 2048 00:11:28.947 Contiguous Queues Required: Yes 00:11:28.947 Arbitration Mechanisms Supported 00:11:28.947 Weighted Round Robin: Not Supported 00:11:28.947 Vendor Specific: Not Supported 00:11:28.947 Reset Timeout: 7500 ms 00:11:28.947 Doorbell Stride: 4 bytes 00:11:28.947 NVM Subsystem Reset: Not Supported 00:11:28.947 Command Sets Supported 00:11:28.947 NVM Command Set: Supported 00:11:28.947 Boot Partition: Not Supported 00:11:28.947 Memory Page Size Minimum: 4096 bytes 00:11:28.947 Memory Page Size Maximum: 65536 bytes 00:11:28.947 Persistent Memory Region: Not Supported 00:11:28.947 Optional Asynchronous Events Supported 00:11:28.947 Namespace Attribute Notices: Supported 00:11:28.947 Firmware Activation Notices: Not Supported 00:11:28.947 ANA Change Notices: Not Supported 00:11:28.947 PLE Aggregate Log Change Notices: Not Supported 00:11:28.947 LBA Status Info Alert Notices: Not Supported 00:11:28.947 EGE Aggregate Log Change Notices: Not Supported 00:11:28.947 Normal NVM Subsystem Shutdown event: Not Supported 00:11:28.947 Zone Descriptor Change Notices: Not Supported 00:11:28.947 Discovery Log Change Notices: Not Supported 00:11:28.947 Controller Attributes 00:11:28.947 128-bit Host Identifier: Not Supported 00:11:28.947 Non-Operational Permissive Mode: Not Supported 00:11:28.947 NVM Sets: Not Supported 00:11:28.947 Read Recovery Levels: Not Supported 00:11:28.947 Endurance Groups: Not Supported 00:11:28.947 Predictable Latency Mode: Not Supported 00:11:28.947 Traffic Based Keep ALive: Not Supported 00:11:28.947 Namespace Granularity: Not Supported 00:11:28.947 SQ Associations: Not Supported 00:11:28.947 UUID List: Not Supported 00:11:28.947 Multi-Domain Subsystem: Not Supported 00:11:28.947 Fixed Capacity Management: Not Supported 00:11:28.947 Variable Capacity Management: Not Supported 00:11:28.947 Delete Endurance Group: Not Supported 00:11:28.947 Delete NVM Set: Not Supported 00:11:28.947 Extended LBA Formats Supported: Supported 00:11:28.947 Flexible Data Placement Supported: Not Supported 00:11:28.947 00:11:28.947 Controller Memory Buffer Support 00:11:28.947 ================================ 00:11:28.947 Supported: No 00:11:28.947 00:11:28.947 Persistent Memory Region Support 00:11:28.947 ================================ 00:11:28.947 Supported: No 00:11:28.947 00:11:28.947 Admin Command Set Attributes 00:11:28.947 ============================ 00:11:28.947 Security Send/Receive: Not Supported 00:11:28.947 Format NVM: Supported 00:11:28.947 Firmware Activate/Download: Not Supported 00:11:28.947 Namespace Management: Supported 00:11:28.947 Device Self-Test: Not Supported 00:11:28.947 Directives: Supported 00:11:28.947 NVMe-MI: Not Supported 00:11:28.947 Virtualization Management: Not Supported 00:11:28.947 Doorbell Buffer Config: Supported 00:11:28.947 Get LBA Status Capability: Not Supported 00:11:28.947 Command & Feature Lockdown Capability: Not Supported 00:11:28.947 Abort Command Limit: 4 00:11:28.947 Async Event Request Limit: 4 00:11:28.947 Number of Firmware Slots: N/A 00:11:28.947 Firmware Slot 1 Read-Only: N/A 00:11:28.947 Firmware Activation Without Reset: N/A 00:11:28.947 Multiple Update Detection Support: N/A 00:11:28.947 Firmware Update Granularity: No Information Provided 00:11:28.947 Per-Namespace SMART Log: 
Yes 00:11:28.947 Asymmetric Namespace Access Log Page: Not Supported 00:11:28.947 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:28.947 Command Effects Log Page: Supported 00:11:28.947 Get Log Page Extended Data: Supported 00:11:28.947 Telemetry Log Pages: Not Supported 00:11:28.947 Persistent Event Log Pages: Not Supported 00:11:28.947 Supported Log Pages Log Page: May Support 00:11:28.947 Commands Supported & Effects Log Page: Not Supported 00:11:28.947 Feature Identifiers & Effects Log Page:May Support 00:11:28.947 NVMe-MI Commands & Effects Log Page: May Support 00:11:28.947 Data Area 4 for Telemetry Log: Not Supported 00:11:28.947 Error Log Page Entries Supported: 1 00:11:28.947 Keep Alive: Not Supported 00:11:28.947 00:11:28.947 NVM Command Set Attributes 00:11:28.947 ========================== 00:11:28.947 Submission Queue Entry Size 00:11:28.947 Max: 64 00:11:28.947 Min: 64 00:11:28.947 Completion Queue Entry Size 00:11:28.947 Max: 16 00:11:28.947 Min: 16 00:11:28.947 Number of Namespaces: 256 00:11:28.947 Compare Command: Supported 00:11:28.947 Write Uncorrectable Command: Not Supported 00:11:28.947 Dataset Management Command: Supported 00:11:28.947 Write Zeroes Command: Supported 00:11:28.947 Set Features Save Field: Supported 00:11:28.948 Reservations: Not Supported 00:11:28.948 Timestamp: Supported 00:11:28.948 Copy: Supported 00:11:28.948 Volatile Write Cache: Present 00:11:28.948 Atomic Write Unit (Normal): 1 00:11:28.948 Atomic Write Unit (PFail): 1 00:11:28.948 Atomic Compare & Write Unit: 1 00:11:28.948 Fused Compare & Write: Not Supported 00:11:28.948 Scatter-Gather List 00:11:28.948 SGL Command Set: Supported 00:11:28.948 SGL Keyed: Not Supported 00:11:28.948 SGL Bit Bucket Descriptor: Not Supported 00:11:28.948 SGL Metadata Pointer: Not Supported 00:11:28.948 Oversized SGL: Not Supported 00:11:28.948 SGL Metadata Address: Not Supported 00:11:28.948 SGL Offset: Not Supported 00:11:28.948 Transport SGL Data Block: Not Supported 00:11:28.948 Replay Protected Memory Block: Not Supported 00:11:28.948 00:11:28.948 Firmware Slot Information 00:11:28.948 ========================= 00:11:28.948 Active slot: 1 00:11:28.948 Slot 1 Firmware Revision: 1.0 00:11:28.948 00:11:28.948 00:11:28.948 Commands Supported and Effects 00:11:28.948 ============================== 00:11:28.948 Admin Commands 00:11:28.948 -------------- 00:11:28.948 Delete I/O Submission Queue (00h): Supported 00:11:28.948 Create I/O Submission Queue (01h): Supported 00:11:28.948 Get Log Page (02h): Supported 00:11:28.948 Delete I/O Completion Queue (04h): Supported 00:11:28.948 Create I/O Completion Queue (05h): Supported 00:11:28.948 Identify (06h): Supported 00:11:28.948 Abort (08h): Supported 00:11:28.948 Set Features (09h): Supported 00:11:28.948 Get Features (0Ah): Supported 00:11:28.948 Asynchronous Event Request (0Ch): Supported 00:11:28.948 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:28.948 Directive Send (19h): Supported 00:11:28.948 Directive Receive (1Ah): Supported 00:11:28.948 Virtualization Management (1Ch): Supported 00:11:28.948 Doorbell Buffer Config (7Ch): Supported 00:11:28.948 Format NVM (80h): Supported LBA-Change 00:11:28.948 I/O Commands 00:11:28.948 ------------ 00:11:28.948 Flush (00h): Supported LBA-Change 00:11:28.948 Write (01h): Supported LBA-Change 00:11:28.948 Read (02h): Supported 00:11:28.948 Compare (05h): Supported 00:11:28.948 Write Zeroes (08h): Supported LBA-Change 00:11:28.948 Dataset Management (09h): Supported LBA-Change 00:11:28.948 Unknown (0Ch): 
Supported 00:11:28.948 Unknown (12h): Supported 00:11:28.948 Copy (19h): Supported LBA-Change 00:11:28.948 Unknown (1Dh): Supported LBA-Change 00:11:28.948 00:11:28.948 Error Log 00:11:28.948 ========= 00:11:28.948 00:11:28.948 Arbitration 00:11:28.948 =========== 00:11:28.948 Arbitration Burst: no limit 00:11:28.948 00:11:28.948 Power Management 00:11:28.948 ================ 00:11:28.948 Number of Power States: 1 00:11:28.948 Current Power State: Power State #0 00:11:28.948 Power State #0: 00:11:28.948 Max Power: 25.00 W 00:11:28.948 Non-Operational State: Operational 00:11:28.948 Entry Latency: 16 microseconds 00:11:28.948 Exit Latency: 4 microseconds 00:11:28.948 Relative Read Throughput: 0 00:11:28.948 Relative Read Latency: 0 00:11:28.948 Relative Write Throughput: 0 00:11:28.948 Relative Write Latency: 0 00:11:28.948 Idle Power: Not Reported 00:11:28.948 Active Power: Not Reported 00:11:28.948 Non-Operational Permissive Mode: Not Supported 00:11:28.948 00:11:28.948 Health Information 00:11:28.948 ================== 00:11:28.948 Critical Warnings: 00:11:28.948 Available Spare Space: OK 00:11:28.948 Temperature: OK 00:11:28.948 Device Reliability: OK 00:11:28.948 Read Only: No 00:11:28.948 Volatile Memory Backup: OK 00:11:28.948 Current Temperature: 323 Kelvin (50 Celsius) 00:11:28.948 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:28.948 Available Spare: 0% 00:11:28.948 Available Spare Threshold: 0% 00:11:28.948 Life Percentage Used: 0% 00:11:28.948 Data Units Read: 2123 00:11:28.948 Data Units Written: 1803 00:11:28.948 Host Read Commands: 101826 00:11:28.948 Host Write Commands: 97596 00:11:28.948 Controller Busy Time: 0 minutes 00:11:28.948 Power Cycles: 0 00:11:28.948 Power On Hours: 0 hours 00:11:28.948 Unsafe Shutdowns: 0 00:11:28.948 Unrecoverable Media Errors: 0 00:11:28.948 Lifetime Error Log Entries: 0 00:11:28.948 Warning Temperature Time: 0 minutes 00:11:28.948 Critical Temperature Time: 0 minutes 00:11:28.948 00:11:28.948 Number of Queues 00:11:28.948 ================ 00:11:28.948 Number of I/O Submission Queues: 64 00:11:28.948 Number of I/O Completion Queues: 64 00:11:28.948 00:11:28.948 ZNS Specific Controller Data 00:11:28.948 ============================ 00:11:28.948 Zone Append Size Limit: 0 00:11:28.948 00:11:28.948 00:11:28.948 Active Namespaces 00:11:28.948 ================= 00:11:28.948 Namespace ID:1 00:11:28.948 Error Recovery Timeout: Unlimited 00:11:28.948 Command Set Identifier: NVM (00h) 00:11:28.948 Deallocate: Supported 00:11:28.948 Deallocated/Unwritten Error: Supported 00:11:28.948 Deallocated Read Value: All 0x00 00:11:28.948 Deallocate in Write Zeroes: Not Supported 00:11:28.948 Deallocated Guard Field: 0xFFFF 00:11:28.948 Flush: Supported 00:11:28.948 Reservation: Not Supported 00:11:28.948 Namespace Sharing Capabilities: Private 00:11:28.948 Size (in LBAs): 1048576 (4GiB) 00:11:28.948 Capacity (in LBAs): 1048576 (4GiB) 00:11:28.948 Utilization (in LBAs): 1048576 (4GiB) 00:11:28.948 Thin Provisioning: Not Supported 00:11:28.948 Per-NS Atomic Units: No 00:11:28.948 Maximum Single Source Range Length: 128 00:11:28.948 Maximum Copy Length: 128 00:11:28.948 Maximum Source Range Count: 128 00:11:28.948 NGUID/EUI64 Never Reused: No 00:11:28.948 Namespace Write Protected: No 00:11:28.948 Number of LBA Formats: 8 00:11:28.948 Current LBA Format: LBA Format #04 00:11:28.948 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:28.948 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:28.948 LBA Format #02: Data Size: 512 Metadata Size: 16 
00:11:28.948 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:28.948 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:28.948 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:28.948 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:28.948 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:28.948 00:11:28.948 NVM Specific Namespace Data 00:11:28.948 =========================== 00:11:28.948 Logical Block Storage Tag Mask: 0 00:11:28.948 Protection Information Capabilities: 00:11:28.948 16b Guard Protection Information Storage Tag Support: No 00:11:28.948 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:28.948 Storage Tag Check Read Support: No 00:11:28.948 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.948 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.948 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.948 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.948 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.948 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.948 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.948 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.948 Namespace ID:2 00:11:28.948 Error Recovery Timeout: Unlimited 00:11:28.948 Command Set Identifier: NVM (00h) 00:11:28.948 Deallocate: Supported 00:11:28.948 Deallocated/Unwritten Error: Supported 00:11:28.948 Deallocated Read Value: All 0x00 00:11:28.948 Deallocate in Write Zeroes: Not Supported 00:11:28.948 Deallocated Guard Field: 0xFFFF 00:11:28.948 Flush: Supported 00:11:28.948 Reservation: Not Supported 00:11:28.948 Namespace Sharing Capabilities: Private 00:11:28.948 Size (in LBAs): 1048576 (4GiB) 00:11:28.948 Capacity (in LBAs): 1048576 (4GiB) 00:11:28.948 Utilization (in LBAs): 1048576 (4GiB) 00:11:28.948 Thin Provisioning: Not Supported 00:11:28.948 Per-NS Atomic Units: No 00:11:28.948 Maximum Single Source Range Length: 128 00:11:28.948 Maximum Copy Length: 128 00:11:28.948 Maximum Source Range Count: 128 00:11:28.948 NGUID/EUI64 Never Reused: No 00:11:28.948 Namespace Write Protected: No 00:11:28.948 Number of LBA Formats: 8 00:11:28.948 Current LBA Format: LBA Format #04 00:11:28.948 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:28.948 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:28.948 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:28.948 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:28.948 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:28.948 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:28.948 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:28.948 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:28.948 00:11:28.948 NVM Specific Namespace Data 00:11:28.948 =========================== 00:11:28.948 Logical Block Storage Tag Mask: 0 00:11:28.948 Protection Information Capabilities: 00:11:28.948 16b Guard Protection Information Storage Tag Support: No 00:11:28.948 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:28.948 Storage Tag Check Read Support: No 00:11:28.948 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 
00:11:28.948 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.948 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.949 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.949 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.949 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.949 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.949 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.949 Namespace ID:3 00:11:28.949 Error Recovery Timeout: Unlimited 00:11:28.949 Command Set Identifier: NVM (00h) 00:11:28.949 Deallocate: Supported 00:11:28.949 Deallocated/Unwritten Error: Supported 00:11:28.949 Deallocated Read Value: All 0x00 00:11:28.949 Deallocate in Write Zeroes: Not Supported 00:11:28.949 Deallocated Guard Field: 0xFFFF 00:11:28.949 Flush: Supported 00:11:28.949 Reservation: Not Supported 00:11:28.949 Namespace Sharing Capabilities: Private 00:11:28.949 Size (in LBAs): 1048576 (4GiB) 00:11:28.949 Capacity (in LBAs): 1048576 (4GiB) 00:11:28.949 Utilization (in LBAs): 1048576 (4GiB) 00:11:28.949 Thin Provisioning: Not Supported 00:11:28.949 Per-NS Atomic Units: No 00:11:28.949 Maximum Single Source Range Length: 128 00:11:28.949 Maximum Copy Length: 128 00:11:28.949 Maximum Source Range Count: 128 00:11:28.949 NGUID/EUI64 Never Reused: No 00:11:28.949 Namespace Write Protected: No 00:11:28.949 Number of LBA Formats: 8 00:11:28.949 Current LBA Format: LBA Format #04 00:11:28.949 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:28.949 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:28.949 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:28.949 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:28.949 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:28.949 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:28.949 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:28.949 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:28.949 00:11:28.949 NVM Specific Namespace Data 00:11:28.949 =========================== 00:11:28.949 Logical Block Storage Tag Mask: 0 00:11:28.949 Protection Information Capabilities: 00:11:28.949 16b Guard Protection Information Storage Tag Support: No 00:11:28.949 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:28.949 Storage Tag Check Read Support: No 00:11:28.949 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.949 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.949 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.949 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.949 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.949 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.949 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.949 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:28.949 09:17:15 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in 
"${bdfs[@]}" 00:11:28.949 09:17:15 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:11:29.208 ===================================================== 00:11:29.208 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:29.208 ===================================================== 00:11:29.208 Controller Capabilities/Features 00:11:29.208 ================================ 00:11:29.208 Vendor ID: 1b36 00:11:29.208 Subsystem Vendor ID: 1af4 00:11:29.208 Serial Number: 12340 00:11:29.208 Model Number: QEMU NVMe Ctrl 00:11:29.208 Firmware Version: 8.0.0 00:11:29.208 Recommended Arb Burst: 6 00:11:29.208 IEEE OUI Identifier: 00 54 52 00:11:29.208 Multi-path I/O 00:11:29.208 May have multiple subsystem ports: No 00:11:29.208 May have multiple controllers: No 00:11:29.208 Associated with SR-IOV VF: No 00:11:29.208 Max Data Transfer Size: 524288 00:11:29.208 Max Number of Namespaces: 256 00:11:29.208 Max Number of I/O Queues: 64 00:11:29.208 NVMe Specification Version (VS): 1.4 00:11:29.208 NVMe Specification Version (Identify): 1.4 00:11:29.208 Maximum Queue Entries: 2048 00:11:29.208 Contiguous Queues Required: Yes 00:11:29.208 Arbitration Mechanisms Supported 00:11:29.208 Weighted Round Robin: Not Supported 00:11:29.208 Vendor Specific: Not Supported 00:11:29.208 Reset Timeout: 7500 ms 00:11:29.208 Doorbell Stride: 4 bytes 00:11:29.208 NVM Subsystem Reset: Not Supported 00:11:29.208 Command Sets Supported 00:11:29.208 NVM Command Set: Supported 00:11:29.208 Boot Partition: Not Supported 00:11:29.208 Memory Page Size Minimum: 4096 bytes 00:11:29.208 Memory Page Size Maximum: 65536 bytes 00:11:29.208 Persistent Memory Region: Not Supported 00:11:29.208 Optional Asynchronous Events Supported 00:11:29.208 Namespace Attribute Notices: Supported 00:11:29.208 Firmware Activation Notices: Not Supported 00:11:29.208 ANA Change Notices: Not Supported 00:11:29.208 PLE Aggregate Log Change Notices: Not Supported 00:11:29.208 LBA Status Info Alert Notices: Not Supported 00:11:29.208 EGE Aggregate Log Change Notices: Not Supported 00:11:29.208 Normal NVM Subsystem Shutdown event: Not Supported 00:11:29.208 Zone Descriptor Change Notices: Not Supported 00:11:29.208 Discovery Log Change Notices: Not Supported 00:11:29.208 Controller Attributes 00:11:29.208 128-bit Host Identifier: Not Supported 00:11:29.208 Non-Operational Permissive Mode: Not Supported 00:11:29.208 NVM Sets: Not Supported 00:11:29.208 Read Recovery Levels: Not Supported 00:11:29.208 Endurance Groups: Not Supported 00:11:29.208 Predictable Latency Mode: Not Supported 00:11:29.208 Traffic Based Keep ALive: Not Supported 00:11:29.208 Namespace Granularity: Not Supported 00:11:29.208 SQ Associations: Not Supported 00:11:29.208 UUID List: Not Supported 00:11:29.208 Multi-Domain Subsystem: Not Supported 00:11:29.208 Fixed Capacity Management: Not Supported 00:11:29.208 Variable Capacity Management: Not Supported 00:11:29.208 Delete Endurance Group: Not Supported 00:11:29.208 Delete NVM Set: Not Supported 00:11:29.208 Extended LBA Formats Supported: Supported 00:11:29.208 Flexible Data Placement Supported: Not Supported 00:11:29.208 00:11:29.208 Controller Memory Buffer Support 00:11:29.208 ================================ 00:11:29.208 Supported: No 00:11:29.208 00:11:29.208 Persistent Memory Region Support 00:11:29.208 ================================ 00:11:29.208 Supported: No 00:11:29.208 00:11:29.209 Admin Command Set Attributes 00:11:29.209 
============================ 00:11:29.209 Security Send/Receive: Not Supported 00:11:29.209 Format NVM: Supported 00:11:29.209 Firmware Activate/Download: Not Supported 00:11:29.209 Namespace Management: Supported 00:11:29.209 Device Self-Test: Not Supported 00:11:29.209 Directives: Supported 00:11:29.209 NVMe-MI: Not Supported 00:11:29.209 Virtualization Management: Not Supported 00:11:29.209 Doorbell Buffer Config: Supported 00:11:29.209 Get LBA Status Capability: Not Supported 00:11:29.209 Command & Feature Lockdown Capability: Not Supported 00:11:29.209 Abort Command Limit: 4 00:11:29.209 Async Event Request Limit: 4 00:11:29.209 Number of Firmware Slots: N/A 00:11:29.209 Firmware Slot 1 Read-Only: N/A 00:11:29.209 Firmware Activation Without Reset: N/A 00:11:29.209 Multiple Update Detection Support: N/A 00:11:29.209 Firmware Update Granularity: No Information Provided 00:11:29.209 Per-Namespace SMART Log: Yes 00:11:29.209 Asymmetric Namespace Access Log Page: Not Supported 00:11:29.209 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:29.209 Command Effects Log Page: Supported 00:11:29.209 Get Log Page Extended Data: Supported 00:11:29.209 Telemetry Log Pages: Not Supported 00:11:29.209 Persistent Event Log Pages: Not Supported 00:11:29.209 Supported Log Pages Log Page: May Support 00:11:29.209 Commands Supported & Effects Log Page: Not Supported 00:11:29.209 Feature Identifiers & Effects Log Page:May Support 00:11:29.209 NVMe-MI Commands & Effects Log Page: May Support 00:11:29.209 Data Area 4 for Telemetry Log: Not Supported 00:11:29.209 Error Log Page Entries Supported: 1 00:11:29.209 Keep Alive: Not Supported 00:11:29.209 00:11:29.209 NVM Command Set Attributes 00:11:29.209 ========================== 00:11:29.209 Submission Queue Entry Size 00:11:29.209 Max: 64 00:11:29.209 Min: 64 00:11:29.209 Completion Queue Entry Size 00:11:29.209 Max: 16 00:11:29.209 Min: 16 00:11:29.209 Number of Namespaces: 256 00:11:29.209 Compare Command: Supported 00:11:29.209 Write Uncorrectable Command: Not Supported 00:11:29.209 Dataset Management Command: Supported 00:11:29.209 Write Zeroes Command: Supported 00:11:29.209 Set Features Save Field: Supported 00:11:29.209 Reservations: Not Supported 00:11:29.209 Timestamp: Supported 00:11:29.209 Copy: Supported 00:11:29.209 Volatile Write Cache: Present 00:11:29.209 Atomic Write Unit (Normal): 1 00:11:29.209 Atomic Write Unit (PFail): 1 00:11:29.209 Atomic Compare & Write Unit: 1 00:11:29.209 Fused Compare & Write: Not Supported 00:11:29.209 Scatter-Gather List 00:11:29.209 SGL Command Set: Supported 00:11:29.209 SGL Keyed: Not Supported 00:11:29.209 SGL Bit Bucket Descriptor: Not Supported 00:11:29.209 SGL Metadata Pointer: Not Supported 00:11:29.209 Oversized SGL: Not Supported 00:11:29.209 SGL Metadata Address: Not Supported 00:11:29.209 SGL Offset: Not Supported 00:11:29.209 Transport SGL Data Block: Not Supported 00:11:29.209 Replay Protected Memory Block: Not Supported 00:11:29.209 00:11:29.209 Firmware Slot Information 00:11:29.209 ========================= 00:11:29.209 Active slot: 1 00:11:29.209 Slot 1 Firmware Revision: 1.0 00:11:29.209 00:11:29.209 00:11:29.209 Commands Supported and Effects 00:11:29.209 ============================== 00:11:29.209 Admin Commands 00:11:29.209 -------------- 00:11:29.209 Delete I/O Submission Queue (00h): Supported 00:11:29.209 Create I/O Submission Queue (01h): Supported 00:11:29.209 Get Log Page (02h): Supported 00:11:29.209 Delete I/O Completion Queue (04h): Supported 00:11:29.209 Create I/O Completion Queue 
(05h): Supported 00:11:29.209 Identify (06h): Supported 00:11:29.209 Abort (08h): Supported 00:11:29.209 Set Features (09h): Supported 00:11:29.209 Get Features (0Ah): Supported 00:11:29.209 Asynchronous Event Request (0Ch): Supported 00:11:29.209 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:29.209 Directive Send (19h): Supported 00:11:29.209 Directive Receive (1Ah): Supported 00:11:29.209 Virtualization Management (1Ch): Supported 00:11:29.209 Doorbell Buffer Config (7Ch): Supported 00:11:29.209 Format NVM (80h): Supported LBA-Change 00:11:29.209 I/O Commands 00:11:29.209 ------------ 00:11:29.209 Flush (00h): Supported LBA-Change 00:11:29.209 Write (01h): Supported LBA-Change 00:11:29.209 Read (02h): Supported 00:11:29.209 Compare (05h): Supported 00:11:29.209 Write Zeroes (08h): Supported LBA-Change 00:11:29.209 Dataset Management (09h): Supported LBA-Change 00:11:29.209 Unknown (0Ch): Supported 00:11:29.209 Unknown (12h): Supported 00:11:29.209 Copy (19h): Supported LBA-Change 00:11:29.209 Unknown (1Dh): Supported LBA-Change 00:11:29.209 00:11:29.209 Error Log 00:11:29.209 ========= 00:11:29.209 00:11:29.209 Arbitration 00:11:29.209 =========== 00:11:29.209 Arbitration Burst: no limit 00:11:29.209 00:11:29.209 Power Management 00:11:29.209 ================ 00:11:29.209 Number of Power States: 1 00:11:29.209 Current Power State: Power State #0 00:11:29.209 Power State #0: 00:11:29.209 Max Power: 25.00 W 00:11:29.209 Non-Operational State: Operational 00:11:29.209 Entry Latency: 16 microseconds 00:11:29.209 Exit Latency: 4 microseconds 00:11:29.209 Relative Read Throughput: 0 00:11:29.209 Relative Read Latency: 0 00:11:29.209 Relative Write Throughput: 0 00:11:29.209 Relative Write Latency: 0 00:11:29.209 Idle Power: Not Reported 00:11:29.209 Active Power: Not Reported 00:11:29.209 Non-Operational Permissive Mode: Not Supported 00:11:29.209 00:11:29.209 Health Information 00:11:29.209 ================== 00:11:29.209 Critical Warnings: 00:11:29.209 Available Spare Space: OK 00:11:29.209 Temperature: OK 00:11:29.209 Device Reliability: OK 00:11:29.209 Read Only: No 00:11:29.209 Volatile Memory Backup: OK 00:11:29.209 Current Temperature: 323 Kelvin (50 Celsius) 00:11:29.209 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:29.209 Available Spare: 0% 00:11:29.209 Available Spare Threshold: 0% 00:11:29.209 Life Percentage Used: 0% 00:11:29.209 Data Units Read: 995 00:11:29.209 Data Units Written: 834 00:11:29.209 Host Read Commands: 48848 00:11:29.209 Host Write Commands: 47441 00:11:29.209 Controller Busy Time: 0 minutes 00:11:29.209 Power Cycles: 0 00:11:29.209 Power On Hours: 0 hours 00:11:29.209 Unsafe Shutdowns: 0 00:11:29.209 Unrecoverable Media Errors: 0 00:11:29.209 Lifetime Error Log Entries: 0 00:11:29.209 Warning Temperature Time: 0 minutes 00:11:29.209 Critical Temperature Time: 0 minutes 00:11:29.209 00:11:29.209 Number of Queues 00:11:29.209 ================ 00:11:29.209 Number of I/O Submission Queues: 64 00:11:29.209 Number of I/O Completion Queues: 64 00:11:29.209 00:11:29.209 ZNS Specific Controller Data 00:11:29.209 ============================ 00:11:29.209 Zone Append Size Limit: 0 00:11:29.209 00:11:29.209 00:11:29.209 Active Namespaces 00:11:29.209 ================= 00:11:29.209 Namespace ID:1 00:11:29.209 Error Recovery Timeout: Unlimited 00:11:29.209 Command Set Identifier: NVM (00h) 00:11:29.209 Deallocate: Supported 00:11:29.209 Deallocated/Unwritten Error: Supported 00:11:29.209 Deallocated Read Value: All 0x00 00:11:29.209 Deallocate in Write 
Zeroes: Not Supported 00:11:29.209 Deallocated Guard Field: 0xFFFF 00:11:29.209 Flush: Supported 00:11:29.209 Reservation: Not Supported 00:11:29.209 Metadata Transferred as: Separate Metadata Buffer 00:11:29.209 Namespace Sharing Capabilities: Private 00:11:29.209 Size (in LBAs): 1548666 (5GiB) 00:11:29.209 Capacity (in LBAs): 1548666 (5GiB) 00:11:29.209 Utilization (in LBAs): 1548666 (5GiB) 00:11:29.209 Thin Provisioning: Not Supported 00:11:29.209 Per-NS Atomic Units: No 00:11:29.209 Maximum Single Source Range Length: 128 00:11:29.209 Maximum Copy Length: 128 00:11:29.209 Maximum Source Range Count: 128 00:11:29.209 NGUID/EUI64 Never Reused: No 00:11:29.209 Namespace Write Protected: No 00:11:29.209 Number of LBA Formats: 8 00:11:29.209 Current LBA Format: LBA Format #07 00:11:29.209 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:29.209 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:29.209 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:29.209 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:29.209 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:29.209 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:29.209 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:29.209 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:29.209 00:11:29.209 NVM Specific Namespace Data 00:11:29.209 =========================== 00:11:29.209 Logical Block Storage Tag Mask: 0 00:11:29.209 Protection Information Capabilities: 00:11:29.209 16b Guard Protection Information Storage Tag Support: No 00:11:29.209 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:29.209 Storage Tag Check Read Support: No 00:11:29.209 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.209 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.209 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.209 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.209 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.210 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.210 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.210 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.210 09:17:15 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:29.210 09:17:15 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:11:29.468 ===================================================== 00:11:29.468 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:29.468 ===================================================== 00:11:29.468 Controller Capabilities/Features 00:11:29.468 ================================ 00:11:29.468 Vendor ID: 1b36 00:11:29.468 Subsystem Vendor ID: 1af4 00:11:29.468 Serial Number: 12341 00:11:29.468 Model Number: QEMU NVMe Ctrl 00:11:29.468 Firmware Version: 8.0.0 00:11:29.468 Recommended Arb Burst: 6 00:11:29.468 IEEE OUI Identifier: 00 54 52 00:11:29.468 Multi-path I/O 00:11:29.468 May have multiple subsystem ports: No 00:11:29.468 May have multiple controllers: No 00:11:29.468 Associated with SR-IOV VF: No 00:11:29.468 Max Data Transfer Size: 524288 00:11:29.468 Max Number of 
Namespaces: 256 00:11:29.468 Max Number of I/O Queues: 64 00:11:29.468 NVMe Specification Version (VS): 1.4 00:11:29.468 NVMe Specification Version (Identify): 1.4 00:11:29.468 Maximum Queue Entries: 2048 00:11:29.468 Contiguous Queues Required: Yes 00:11:29.468 Arbitration Mechanisms Supported 00:11:29.469 Weighted Round Robin: Not Supported 00:11:29.469 Vendor Specific: Not Supported 00:11:29.469 Reset Timeout: 7500 ms 00:11:29.469 Doorbell Stride: 4 bytes 00:11:29.469 NVM Subsystem Reset: Not Supported 00:11:29.469 Command Sets Supported 00:11:29.469 NVM Command Set: Supported 00:11:29.469 Boot Partition: Not Supported 00:11:29.469 Memory Page Size Minimum: 4096 bytes 00:11:29.469 Memory Page Size Maximum: 65536 bytes 00:11:29.469 Persistent Memory Region: Not Supported 00:11:29.469 Optional Asynchronous Events Supported 00:11:29.469 Namespace Attribute Notices: Supported 00:11:29.469 Firmware Activation Notices: Not Supported 00:11:29.469 ANA Change Notices: Not Supported 00:11:29.469 PLE Aggregate Log Change Notices: Not Supported 00:11:29.469 LBA Status Info Alert Notices: Not Supported 00:11:29.469 EGE Aggregate Log Change Notices: Not Supported 00:11:29.469 Normal NVM Subsystem Shutdown event: Not Supported 00:11:29.469 Zone Descriptor Change Notices: Not Supported 00:11:29.469 Discovery Log Change Notices: Not Supported 00:11:29.469 Controller Attributes 00:11:29.469 128-bit Host Identifier: Not Supported 00:11:29.469 Non-Operational Permissive Mode: Not Supported 00:11:29.469 NVM Sets: Not Supported 00:11:29.469 Read Recovery Levels: Not Supported 00:11:29.469 Endurance Groups: Not Supported 00:11:29.469 Predictable Latency Mode: Not Supported 00:11:29.469 Traffic Based Keep ALive: Not Supported 00:11:29.469 Namespace Granularity: Not Supported 00:11:29.469 SQ Associations: Not Supported 00:11:29.469 UUID List: Not Supported 00:11:29.469 Multi-Domain Subsystem: Not Supported 00:11:29.469 Fixed Capacity Management: Not Supported 00:11:29.469 Variable Capacity Management: Not Supported 00:11:29.469 Delete Endurance Group: Not Supported 00:11:29.469 Delete NVM Set: Not Supported 00:11:29.469 Extended LBA Formats Supported: Supported 00:11:29.469 Flexible Data Placement Supported: Not Supported 00:11:29.469 00:11:29.469 Controller Memory Buffer Support 00:11:29.469 ================================ 00:11:29.469 Supported: No 00:11:29.469 00:11:29.469 Persistent Memory Region Support 00:11:29.469 ================================ 00:11:29.469 Supported: No 00:11:29.469 00:11:29.469 Admin Command Set Attributes 00:11:29.469 ============================ 00:11:29.469 Security Send/Receive: Not Supported 00:11:29.469 Format NVM: Supported 00:11:29.469 Firmware Activate/Download: Not Supported 00:11:29.469 Namespace Management: Supported 00:11:29.469 Device Self-Test: Not Supported 00:11:29.469 Directives: Supported 00:11:29.469 NVMe-MI: Not Supported 00:11:29.469 Virtualization Management: Not Supported 00:11:29.469 Doorbell Buffer Config: Supported 00:11:29.469 Get LBA Status Capability: Not Supported 00:11:29.469 Command & Feature Lockdown Capability: Not Supported 00:11:29.469 Abort Command Limit: 4 00:11:29.469 Async Event Request Limit: 4 00:11:29.469 Number of Firmware Slots: N/A 00:11:29.469 Firmware Slot 1 Read-Only: N/A 00:11:29.469 Firmware Activation Without Reset: N/A 00:11:29.469 Multiple Update Detection Support: N/A 00:11:29.469 Firmware Update Granularity: No Information Provided 00:11:29.469 Per-Namespace SMART Log: Yes 00:11:29.469 Asymmetric Namespace Access Log Page: Not 
Supported 00:11:29.469 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:29.469 Command Effects Log Page: Supported 00:11:29.469 Get Log Page Extended Data: Supported 00:11:29.469 Telemetry Log Pages: Not Supported 00:11:29.469 Persistent Event Log Pages: Not Supported 00:11:29.469 Supported Log Pages Log Page: May Support 00:11:29.469 Commands Supported & Effects Log Page: Not Supported 00:11:29.469 Feature Identifiers & Effects Log Page:May Support 00:11:29.469 NVMe-MI Commands & Effects Log Page: May Support 00:11:29.469 Data Area 4 for Telemetry Log: Not Supported 00:11:29.469 Error Log Page Entries Supported: 1 00:11:29.469 Keep Alive: Not Supported 00:11:29.469 00:11:29.469 NVM Command Set Attributes 00:11:29.469 ========================== 00:11:29.469 Submission Queue Entry Size 00:11:29.469 Max: 64 00:11:29.469 Min: 64 00:11:29.469 Completion Queue Entry Size 00:11:29.469 Max: 16 00:11:29.469 Min: 16 00:11:29.469 Number of Namespaces: 256 00:11:29.469 Compare Command: Supported 00:11:29.469 Write Uncorrectable Command: Not Supported 00:11:29.469 Dataset Management Command: Supported 00:11:29.469 Write Zeroes Command: Supported 00:11:29.469 Set Features Save Field: Supported 00:11:29.469 Reservations: Not Supported 00:11:29.469 Timestamp: Supported 00:11:29.469 Copy: Supported 00:11:29.469 Volatile Write Cache: Present 00:11:29.469 Atomic Write Unit (Normal): 1 00:11:29.469 Atomic Write Unit (PFail): 1 00:11:29.469 Atomic Compare & Write Unit: 1 00:11:29.469 Fused Compare & Write: Not Supported 00:11:29.469 Scatter-Gather List 00:11:29.469 SGL Command Set: Supported 00:11:29.469 SGL Keyed: Not Supported 00:11:29.469 SGL Bit Bucket Descriptor: Not Supported 00:11:29.469 SGL Metadata Pointer: Not Supported 00:11:29.469 Oversized SGL: Not Supported 00:11:29.469 SGL Metadata Address: Not Supported 00:11:29.469 SGL Offset: Not Supported 00:11:29.469 Transport SGL Data Block: Not Supported 00:11:29.469 Replay Protected Memory Block: Not Supported 00:11:29.469 00:11:29.469 Firmware Slot Information 00:11:29.469 ========================= 00:11:29.469 Active slot: 1 00:11:29.469 Slot 1 Firmware Revision: 1.0 00:11:29.469 00:11:29.469 00:11:29.469 Commands Supported and Effects 00:11:29.469 ============================== 00:11:29.469 Admin Commands 00:11:29.469 -------------- 00:11:29.469 Delete I/O Submission Queue (00h): Supported 00:11:29.469 Create I/O Submission Queue (01h): Supported 00:11:29.469 Get Log Page (02h): Supported 00:11:29.469 Delete I/O Completion Queue (04h): Supported 00:11:29.469 Create I/O Completion Queue (05h): Supported 00:11:29.469 Identify (06h): Supported 00:11:29.469 Abort (08h): Supported 00:11:29.469 Set Features (09h): Supported 00:11:29.469 Get Features (0Ah): Supported 00:11:29.469 Asynchronous Event Request (0Ch): Supported 00:11:29.469 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:29.469 Directive Send (19h): Supported 00:11:29.469 Directive Receive (1Ah): Supported 00:11:29.469 Virtualization Management (1Ch): Supported 00:11:29.469 Doorbell Buffer Config (7Ch): Supported 00:11:29.469 Format NVM (80h): Supported LBA-Change 00:11:29.469 I/O Commands 00:11:29.469 ------------ 00:11:29.469 Flush (00h): Supported LBA-Change 00:11:29.469 Write (01h): Supported LBA-Change 00:11:29.470 Read (02h): Supported 00:11:29.470 Compare (05h): Supported 00:11:29.470 Write Zeroes (08h): Supported LBA-Change 00:11:29.470 Dataset Management (09h): Supported LBA-Change 00:11:29.470 Unknown (0Ch): Supported 00:11:29.470 Unknown (12h): Supported 00:11:29.470 
Copy (19h): Supported LBA-Change 00:11:29.470 Unknown (1Dh): Supported LBA-Change 00:11:29.470 00:11:29.470 Error Log 00:11:29.470 ========= 00:11:29.470 00:11:29.470 Arbitration 00:11:29.470 =========== 00:11:29.470 Arbitration Burst: no limit 00:11:29.470 00:11:29.470 Power Management 00:11:29.470 ================ 00:11:29.470 Number of Power States: 1 00:11:29.470 Current Power State: Power State #0 00:11:29.470 Power State #0: 00:11:29.470 Max Power: 25.00 W 00:11:29.470 Non-Operational State: Operational 00:11:29.470 Entry Latency: 16 microseconds 00:11:29.470 Exit Latency: 4 microseconds 00:11:29.470 Relative Read Throughput: 0 00:11:29.470 Relative Read Latency: 0 00:11:29.470 Relative Write Throughput: 0 00:11:29.470 Relative Write Latency: 0 00:11:29.728 Idle Power: Not Reported 00:11:29.728 Active Power: Not Reported 00:11:29.728 Non-Operational Permissive Mode: Not Supported 00:11:29.728 00:11:29.728 Health Information 00:11:29.728 ================== 00:11:29.728 Critical Warnings: 00:11:29.728 Available Spare Space: OK 00:11:29.728 Temperature: OK 00:11:29.728 Device Reliability: OK 00:11:29.728 Read Only: No 00:11:29.728 Volatile Memory Backup: OK 00:11:29.728 Current Temperature: 323 Kelvin (50 Celsius) 00:11:29.728 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:29.728 Available Spare: 0% 00:11:29.728 Available Spare Threshold: 0% 00:11:29.728 Life Percentage Used: 0% 00:11:29.728 Data Units Read: 735 00:11:29.728 Data Units Written: 581 00:11:29.728 Host Read Commands: 34776 00:11:29.728 Host Write Commands: 32426 00:11:29.728 Controller Busy Time: 0 minutes 00:11:29.728 Power Cycles: 0 00:11:29.728 Power On Hours: 0 hours 00:11:29.728 Unsafe Shutdowns: 0 00:11:29.728 Unrecoverable Media Errors: 0 00:11:29.728 Lifetime Error Log Entries: 0 00:11:29.728 Warning Temperature Time: 0 minutes 00:11:29.728 Critical Temperature Time: 0 minutes 00:11:29.728 00:11:29.728 Number of Queues 00:11:29.728 ================ 00:11:29.728 Number of I/O Submission Queues: 64 00:11:29.728 Number of I/O Completion Queues: 64 00:11:29.728 00:11:29.729 ZNS Specific Controller Data 00:11:29.729 ============================ 00:11:29.729 Zone Append Size Limit: 0 00:11:29.729 00:11:29.729 00:11:29.729 Active Namespaces 00:11:29.729 ================= 00:11:29.729 Namespace ID:1 00:11:29.729 Error Recovery Timeout: Unlimited 00:11:29.729 Command Set Identifier: NVM (00h) 00:11:29.729 Deallocate: Supported 00:11:29.729 Deallocated/Unwritten Error: Supported 00:11:29.729 Deallocated Read Value: All 0x00 00:11:29.729 Deallocate in Write Zeroes: Not Supported 00:11:29.729 Deallocated Guard Field: 0xFFFF 00:11:29.729 Flush: Supported 00:11:29.729 Reservation: Not Supported 00:11:29.729 Namespace Sharing Capabilities: Private 00:11:29.729 Size (in LBAs): 1310720 (5GiB) 00:11:29.729 Capacity (in LBAs): 1310720 (5GiB) 00:11:29.729 Utilization (in LBAs): 1310720 (5GiB) 00:11:29.729 Thin Provisioning: Not Supported 00:11:29.729 Per-NS Atomic Units: No 00:11:29.729 Maximum Single Source Range Length: 128 00:11:29.729 Maximum Copy Length: 128 00:11:29.729 Maximum Source Range Count: 128 00:11:29.729 NGUID/EUI64 Never Reused: No 00:11:29.729 Namespace Write Protected: No 00:11:29.729 Number of LBA Formats: 8 00:11:29.729 Current LBA Format: LBA Format #04 00:11:29.729 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:29.729 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:29.729 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:29.729 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:29.729 
LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:29.729 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:29.729 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:29.729 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:29.729 00:11:29.729 NVM Specific Namespace Data 00:11:29.729 =========================== 00:11:29.729 Logical Block Storage Tag Mask: 0 00:11:29.729 Protection Information Capabilities: 00:11:29.729 16b Guard Protection Information Storage Tag Support: No 00:11:29.729 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:29.729 Storage Tag Check Read Support: No 00:11:29.729 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.729 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.729 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.729 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.729 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.729 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.729 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.729 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.729 09:17:15 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:29.729 09:17:15 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:11:29.988 ===================================================== 00:11:29.988 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:29.988 ===================================================== 00:11:29.988 Controller Capabilities/Features 00:11:29.988 ================================ 00:11:29.988 Vendor ID: 1b36 00:11:29.988 Subsystem Vendor ID: 1af4 00:11:29.988 Serial Number: 12342 00:11:29.988 Model Number: QEMU NVMe Ctrl 00:11:29.988 Firmware Version: 8.0.0 00:11:29.988 Recommended Arb Burst: 6 00:11:29.988 IEEE OUI Identifier: 00 54 52 00:11:29.988 Multi-path I/O 00:11:29.988 May have multiple subsystem ports: No 00:11:29.988 May have multiple controllers: No 00:11:29.988 Associated with SR-IOV VF: No 00:11:29.988 Max Data Transfer Size: 524288 00:11:29.988 Max Number of Namespaces: 256 00:11:29.988 Max Number of I/O Queues: 64 00:11:29.988 NVMe Specification Version (VS): 1.4 00:11:29.988 NVMe Specification Version (Identify): 1.4 00:11:29.988 Maximum Queue Entries: 2048 00:11:29.988 Contiguous Queues Required: Yes 00:11:29.988 Arbitration Mechanisms Supported 00:11:29.988 Weighted Round Robin: Not Supported 00:11:29.988 Vendor Specific: Not Supported 00:11:29.988 Reset Timeout: 7500 ms 00:11:29.988 Doorbell Stride: 4 bytes 00:11:29.988 NVM Subsystem Reset: Not Supported 00:11:29.988 Command Sets Supported 00:11:29.988 NVM Command Set: Supported 00:11:29.988 Boot Partition: Not Supported 00:11:29.988 Memory Page Size Minimum: 4096 bytes 00:11:29.988 Memory Page Size Maximum: 65536 bytes 00:11:29.988 Persistent Memory Region: Not Supported 00:11:29.988 Optional Asynchronous Events Supported 00:11:29.988 Namespace Attribute Notices: Supported 00:11:29.988 Firmware Activation Notices: Not Supported 00:11:29.988 ANA Change Notices: Not Supported 00:11:29.988 PLE Aggregate Log Change Notices: Not 
Supported 00:11:29.988 LBA Status Info Alert Notices: Not Supported 00:11:29.988 EGE Aggregate Log Change Notices: Not Supported 00:11:29.988 Normal NVM Subsystem Shutdown event: Not Supported 00:11:29.988 Zone Descriptor Change Notices: Not Supported 00:11:29.988 Discovery Log Change Notices: Not Supported 00:11:29.988 Controller Attributes 00:11:29.988 128-bit Host Identifier: Not Supported 00:11:29.988 Non-Operational Permissive Mode: Not Supported 00:11:29.988 NVM Sets: Not Supported 00:11:29.988 Read Recovery Levels: Not Supported 00:11:29.988 Endurance Groups: Not Supported 00:11:29.989 Predictable Latency Mode: Not Supported 00:11:29.989 Traffic Based Keep ALive: Not Supported 00:11:29.989 Namespace Granularity: Not Supported 00:11:29.989 SQ Associations: Not Supported 00:11:29.989 UUID List: Not Supported 00:11:29.989 Multi-Domain Subsystem: Not Supported 00:11:29.989 Fixed Capacity Management: Not Supported 00:11:29.989 Variable Capacity Management: Not Supported 00:11:29.989 Delete Endurance Group: Not Supported 00:11:29.989 Delete NVM Set: Not Supported 00:11:29.989 Extended LBA Formats Supported: Supported 00:11:29.989 Flexible Data Placement Supported: Not Supported 00:11:29.989 00:11:29.989 Controller Memory Buffer Support 00:11:29.989 ================================ 00:11:29.989 Supported: No 00:11:29.989 00:11:29.989 Persistent Memory Region Support 00:11:29.989 ================================ 00:11:29.989 Supported: No 00:11:29.989 00:11:29.989 Admin Command Set Attributes 00:11:29.989 ============================ 00:11:29.989 Security Send/Receive: Not Supported 00:11:29.989 Format NVM: Supported 00:11:29.989 Firmware Activate/Download: Not Supported 00:11:29.989 Namespace Management: Supported 00:11:29.989 Device Self-Test: Not Supported 00:11:29.989 Directives: Supported 00:11:29.989 NVMe-MI: Not Supported 00:11:29.989 Virtualization Management: Not Supported 00:11:29.989 Doorbell Buffer Config: Supported 00:11:29.989 Get LBA Status Capability: Not Supported 00:11:29.989 Command & Feature Lockdown Capability: Not Supported 00:11:29.989 Abort Command Limit: 4 00:11:29.989 Async Event Request Limit: 4 00:11:29.989 Number of Firmware Slots: N/A 00:11:29.989 Firmware Slot 1 Read-Only: N/A 00:11:29.989 Firmware Activation Without Reset: N/A 00:11:29.989 Multiple Update Detection Support: N/A 00:11:29.989 Firmware Update Granularity: No Information Provided 00:11:29.989 Per-Namespace SMART Log: Yes 00:11:29.989 Asymmetric Namespace Access Log Page: Not Supported 00:11:29.989 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:29.989 Command Effects Log Page: Supported 00:11:29.989 Get Log Page Extended Data: Supported 00:11:29.989 Telemetry Log Pages: Not Supported 00:11:29.989 Persistent Event Log Pages: Not Supported 00:11:29.989 Supported Log Pages Log Page: May Support 00:11:29.989 Commands Supported & Effects Log Page: Not Supported 00:11:29.989 Feature Identifiers & Effects Log Page:May Support 00:11:29.989 NVMe-MI Commands & Effects Log Page: May Support 00:11:29.989 Data Area 4 for Telemetry Log: Not Supported 00:11:29.989 Error Log Page Entries Supported: 1 00:11:29.989 Keep Alive: Not Supported 00:11:29.989 00:11:29.989 NVM Command Set Attributes 00:11:29.989 ========================== 00:11:29.989 Submission Queue Entry Size 00:11:29.989 Max: 64 00:11:29.989 Min: 64 00:11:29.989 Completion Queue Entry Size 00:11:29.989 Max: 16 00:11:29.989 Min: 16 00:11:29.989 Number of Namespaces: 256 00:11:29.989 Compare Command: Supported 00:11:29.989 Write Uncorrectable Command: 
Not Supported 00:11:29.989 Dataset Management Command: Supported 00:11:29.989 Write Zeroes Command: Supported 00:11:29.989 Set Features Save Field: Supported 00:11:29.989 Reservations: Not Supported 00:11:29.989 Timestamp: Supported 00:11:29.989 Copy: Supported 00:11:29.989 Volatile Write Cache: Present 00:11:29.989 Atomic Write Unit (Normal): 1 00:11:29.989 Atomic Write Unit (PFail): 1 00:11:29.989 Atomic Compare & Write Unit: 1 00:11:29.989 Fused Compare & Write: Not Supported 00:11:29.989 Scatter-Gather List 00:11:29.989 SGL Command Set: Supported 00:11:29.989 SGL Keyed: Not Supported 00:11:29.989 SGL Bit Bucket Descriptor: Not Supported 00:11:29.989 SGL Metadata Pointer: Not Supported 00:11:29.989 Oversized SGL: Not Supported 00:11:29.989 SGL Metadata Address: Not Supported 00:11:29.989 SGL Offset: Not Supported 00:11:29.989 Transport SGL Data Block: Not Supported 00:11:29.989 Replay Protected Memory Block: Not Supported 00:11:29.989 00:11:29.989 Firmware Slot Information 00:11:29.989 ========================= 00:11:29.989 Active slot: 1 00:11:29.989 Slot 1 Firmware Revision: 1.0 00:11:29.989 00:11:29.989 00:11:29.989 Commands Supported and Effects 00:11:29.989 ============================== 00:11:29.989 Admin Commands 00:11:29.989 -------------- 00:11:29.989 Delete I/O Submission Queue (00h): Supported 00:11:29.989 Create I/O Submission Queue (01h): Supported 00:11:29.989 Get Log Page (02h): Supported 00:11:29.989 Delete I/O Completion Queue (04h): Supported 00:11:29.989 Create I/O Completion Queue (05h): Supported 00:11:29.989 Identify (06h): Supported 00:11:29.989 Abort (08h): Supported 00:11:29.989 Set Features (09h): Supported 00:11:29.989 Get Features (0Ah): Supported 00:11:29.989 Asynchronous Event Request (0Ch): Supported 00:11:29.989 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:29.989 Directive Send (19h): Supported 00:11:29.989 Directive Receive (1Ah): Supported 00:11:29.989 Virtualization Management (1Ch): Supported 00:11:29.989 Doorbell Buffer Config (7Ch): Supported 00:11:29.989 Format NVM (80h): Supported LBA-Change 00:11:29.989 I/O Commands 00:11:29.989 ------------ 00:11:29.989 Flush (00h): Supported LBA-Change 00:11:29.989 Write (01h): Supported LBA-Change 00:11:29.989 Read (02h): Supported 00:11:29.989 Compare (05h): Supported 00:11:29.989 Write Zeroes (08h): Supported LBA-Change 00:11:29.989 Dataset Management (09h): Supported LBA-Change 00:11:29.989 Unknown (0Ch): Supported 00:11:29.989 Unknown (12h): Supported 00:11:29.989 Copy (19h): Supported LBA-Change 00:11:29.989 Unknown (1Dh): Supported LBA-Change 00:11:29.989 00:11:29.989 Error Log 00:11:29.989 ========= 00:11:29.989 00:11:29.989 Arbitration 00:11:29.989 =========== 00:11:29.989 Arbitration Burst: no limit 00:11:29.989 00:11:29.989 Power Management 00:11:29.989 ================ 00:11:29.989 Number of Power States: 1 00:11:29.989 Current Power State: Power State #0 00:11:29.989 Power State #0: 00:11:29.989 Max Power: 25.00 W 00:11:29.989 Non-Operational State: Operational 00:11:29.989 Entry Latency: 16 microseconds 00:11:29.989 Exit Latency: 4 microseconds 00:11:29.989 Relative Read Throughput: 0 00:11:29.989 Relative Read Latency: 0 00:11:29.989 Relative Write Throughput: 0 00:11:29.989 Relative Write Latency: 0 00:11:29.989 Idle Power: Not Reported 00:11:29.989 Active Power: Not Reported 00:11:29.989 Non-Operational Permissive Mode: Not Supported 00:11:29.989 00:11:29.989 Health Information 00:11:29.989 ================== 00:11:29.989 Critical Warnings: 00:11:29.989 Available Spare Space: 
OK 00:11:29.989 Temperature: OK 00:11:29.989 Device Reliability: OK 00:11:29.990 Read Only: No 00:11:29.990 Volatile Memory Backup: OK 00:11:29.990 Current Temperature: 323 Kelvin (50 Celsius) 00:11:29.990 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:29.990 Available Spare: 0% 00:11:29.990 Available Spare Threshold: 0% 00:11:29.990 Life Percentage Used: 0% 00:11:29.990 Data Units Read: 2123 00:11:29.990 Data Units Written: 1803 00:11:29.990 Host Read Commands: 101826 00:11:29.990 Host Write Commands: 97596 00:11:29.990 Controller Busy Time: 0 minutes 00:11:29.990 Power Cycles: 0 00:11:29.990 Power On Hours: 0 hours 00:11:29.990 Unsafe Shutdowns: 0 00:11:29.990 Unrecoverable Media Errors: 0 00:11:29.990 Lifetime Error Log Entries: 0 00:11:29.990 Warning Temperature Time: 0 minutes 00:11:29.990 Critical Temperature Time: 0 minutes 00:11:29.990 00:11:29.990 Number of Queues 00:11:29.990 ================ 00:11:29.990 Number of I/O Submission Queues: 64 00:11:29.990 Number of I/O Completion Queues: 64 00:11:29.990 00:11:29.990 ZNS Specific Controller Data 00:11:29.990 ============================ 00:11:29.990 Zone Append Size Limit: 0 00:11:29.990 00:11:29.990 00:11:29.990 Active Namespaces 00:11:29.990 ================= 00:11:29.990 Namespace ID:1 00:11:29.990 Error Recovery Timeout: Unlimited 00:11:29.990 Command Set Identifier: NVM (00h) 00:11:29.990 Deallocate: Supported 00:11:29.990 Deallocated/Unwritten Error: Supported 00:11:29.990 Deallocated Read Value: All 0x00 00:11:29.990 Deallocate in Write Zeroes: Not Supported 00:11:29.990 Deallocated Guard Field: 0xFFFF 00:11:29.990 Flush: Supported 00:11:29.990 Reservation: Not Supported 00:11:29.990 Namespace Sharing Capabilities: Private 00:11:29.990 Size (in LBAs): 1048576 (4GiB) 00:11:29.990 Capacity (in LBAs): 1048576 (4GiB) 00:11:29.990 Utilization (in LBAs): 1048576 (4GiB) 00:11:29.990 Thin Provisioning: Not Supported 00:11:29.990 Per-NS Atomic Units: No 00:11:29.990 Maximum Single Source Range Length: 128 00:11:29.990 Maximum Copy Length: 128 00:11:29.990 Maximum Source Range Count: 128 00:11:29.990 NGUID/EUI64 Never Reused: No 00:11:29.990 Namespace Write Protected: No 00:11:29.990 Number of LBA Formats: 8 00:11:29.990 Current LBA Format: LBA Format #04 00:11:29.990 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:29.990 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:29.990 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:29.990 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:29.990 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:29.990 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:29.990 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:29.990 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:29.990 00:11:29.990 NVM Specific Namespace Data 00:11:29.990 =========================== 00:11:29.990 Logical Block Storage Tag Mask: 0 00:11:29.990 Protection Information Capabilities: 00:11:29.990 16b Guard Protection Information Storage Tag Support: No 00:11:29.990 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:29.990 Storage Tag Check Read Support: No 00:11:29.990 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.990 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.990 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.990 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 
16b Guard PI 00:11:29.990 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.990 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.990 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.990 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.990 Namespace ID:2 00:11:29.990 Error Recovery Timeout: Unlimited 00:11:29.990 Command Set Identifier: NVM (00h) 00:11:29.990 Deallocate: Supported 00:11:29.990 Deallocated/Unwritten Error: Supported 00:11:29.990 Deallocated Read Value: All 0x00 00:11:29.990 Deallocate in Write Zeroes: Not Supported 00:11:29.990 Deallocated Guard Field: 0xFFFF 00:11:29.990 Flush: Supported 00:11:29.990 Reservation: Not Supported 00:11:29.990 Namespace Sharing Capabilities: Private 00:11:29.990 Size (in LBAs): 1048576 (4GiB) 00:11:29.990 Capacity (in LBAs): 1048576 (4GiB) 00:11:29.990 Utilization (in LBAs): 1048576 (4GiB) 00:11:29.990 Thin Provisioning: Not Supported 00:11:29.990 Per-NS Atomic Units: No 00:11:29.990 Maximum Single Source Range Length: 128 00:11:29.990 Maximum Copy Length: 128 00:11:29.990 Maximum Source Range Count: 128 00:11:29.990 NGUID/EUI64 Never Reused: No 00:11:29.990 Namespace Write Protected: No 00:11:29.990 Number of LBA Formats: 8 00:11:29.990 Current LBA Format: LBA Format #04 00:11:29.990 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:29.990 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:29.990 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:29.990 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:29.990 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:29.990 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:29.990 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:29.990 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:29.990 00:11:29.990 NVM Specific Namespace Data 00:11:29.990 =========================== 00:11:29.990 Logical Block Storage Tag Mask: 0 00:11:29.990 Protection Information Capabilities: 00:11:29.990 16b Guard Protection Information Storage Tag Support: No 00:11:29.990 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:29.990 Storage Tag Check Read Support: No 00:11:29.990 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.990 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.990 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.990 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.990 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.990 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.990 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.990 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.990 Namespace ID:3 00:11:29.990 Error Recovery Timeout: Unlimited 00:11:29.990 Command Set Identifier: NVM (00h) 00:11:29.990 Deallocate: Supported 00:11:29.990 Deallocated/Unwritten Error: Supported 00:11:29.990 Deallocated Read Value: All 0x00 00:11:29.990 Deallocate in Write Zeroes: Not Supported 00:11:29.990 Deallocated Guard Field: 0xFFFF 00:11:29.990 Flush: Supported 
00:11:29.990 Reservation: Not Supported 00:11:29.990 Namespace Sharing Capabilities: Private 00:11:29.990 Size (in LBAs): 1048576 (4GiB) 00:11:29.990 Capacity (in LBAs): 1048576 (4GiB) 00:11:29.990 Utilization (in LBAs): 1048576 (4GiB) 00:11:29.990 Thin Provisioning: Not Supported 00:11:29.990 Per-NS Atomic Units: No 00:11:29.990 Maximum Single Source Range Length: 128 00:11:29.990 Maximum Copy Length: 128 00:11:29.990 Maximum Source Range Count: 128 00:11:29.990 NGUID/EUI64 Never Reused: No 00:11:29.990 Namespace Write Protected: No 00:11:29.990 Number of LBA Formats: 8 00:11:29.990 Current LBA Format: LBA Format #04 00:11:29.990 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:29.990 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:29.990 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:29.990 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:29.990 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:29.990 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:29.990 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:29.990 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:29.990 00:11:29.990 NVM Specific Namespace Data 00:11:29.990 =========================== 00:11:29.991 Logical Block Storage Tag Mask: 0 00:11:29.991 Protection Information Capabilities: 00:11:29.991 16b Guard Protection Information Storage Tag Support: No 00:11:29.991 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:29.991 Storage Tag Check Read Support: No 00:11:29.991 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.991 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.991 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.991 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.991 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.991 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.991 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.991 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:29.991 09:17:16 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:29.991 09:17:16 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:11:30.251 ===================================================== 00:11:30.251 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:30.251 ===================================================== 00:11:30.251 Controller Capabilities/Features 00:11:30.251 ================================ 00:11:30.251 Vendor ID: 1b36 00:11:30.251 Subsystem Vendor ID: 1af4 00:11:30.251 Serial Number: 12343 00:11:30.251 Model Number: QEMU NVMe Ctrl 00:11:30.251 Firmware Version: 8.0.0 00:11:30.251 Recommended Arb Burst: 6 00:11:30.251 IEEE OUI Identifier: 00 54 52 00:11:30.251 Multi-path I/O 00:11:30.251 May have multiple subsystem ports: No 00:11:30.251 May have multiple controllers: Yes 00:11:30.251 Associated with SR-IOV VF: No 00:11:30.251 Max Data Transfer Size: 524288 00:11:30.251 Max Number of Namespaces: 256 00:11:30.251 Max Number of I/O Queues: 64 00:11:30.251 NVMe Specification Version (VS): 1.4 00:11:30.251 NVMe Specification Version (Identify): 
1.4 00:11:30.251 Maximum Queue Entries: 2048 00:11:30.251 Contiguous Queues Required: Yes 00:11:30.251 Arbitration Mechanisms Supported 00:11:30.251 Weighted Round Robin: Not Supported 00:11:30.251 Vendor Specific: Not Supported 00:11:30.251 Reset Timeout: 7500 ms 00:11:30.251 Doorbell Stride: 4 bytes 00:11:30.251 NVM Subsystem Reset: Not Supported 00:11:30.251 Command Sets Supported 00:11:30.251 NVM Command Set: Supported 00:11:30.251 Boot Partition: Not Supported 00:11:30.251 Memory Page Size Minimum: 4096 bytes 00:11:30.251 Memory Page Size Maximum: 65536 bytes 00:11:30.251 Persistent Memory Region: Not Supported 00:11:30.251 Optional Asynchronous Events Supported 00:11:30.251 Namespace Attribute Notices: Supported 00:11:30.251 Firmware Activation Notices: Not Supported 00:11:30.251 ANA Change Notices: Not Supported 00:11:30.251 PLE Aggregate Log Change Notices: Not Supported 00:11:30.251 LBA Status Info Alert Notices: Not Supported 00:11:30.251 EGE Aggregate Log Change Notices: Not Supported 00:11:30.251 Normal NVM Subsystem Shutdown event: Not Supported 00:11:30.251 Zone Descriptor Change Notices: Not Supported 00:11:30.251 Discovery Log Change Notices: Not Supported 00:11:30.251 Controller Attributes 00:11:30.251 128-bit Host Identifier: Not Supported 00:11:30.251 Non-Operational Permissive Mode: Not Supported 00:11:30.251 NVM Sets: Not Supported 00:11:30.251 Read Recovery Levels: Not Supported 00:11:30.251 Endurance Groups: Supported 00:11:30.251 Predictable Latency Mode: Not Supported 00:11:30.251 Traffic Based Keep ALive: Not Supported 00:11:30.251 Namespace Granularity: Not Supported 00:11:30.251 SQ Associations: Not Supported 00:11:30.251 UUID List: Not Supported 00:11:30.251 Multi-Domain Subsystem: Not Supported 00:11:30.251 Fixed Capacity Management: Not Supported 00:11:30.251 Variable Capacity Management: Not Supported 00:11:30.251 Delete Endurance Group: Not Supported 00:11:30.251 Delete NVM Set: Not Supported 00:11:30.251 Extended LBA Formats Supported: Supported 00:11:30.251 Flexible Data Placement Supported: Supported 00:11:30.251 00:11:30.251 Controller Memory Buffer Support 00:11:30.251 ================================ 00:11:30.251 Supported: No 00:11:30.251 00:11:30.251 Persistent Memory Region Support 00:11:30.251 ================================ 00:11:30.251 Supported: No 00:11:30.251 00:11:30.251 Admin Command Set Attributes 00:11:30.251 ============================ 00:11:30.251 Security Send/Receive: Not Supported 00:11:30.251 Format NVM: Supported 00:11:30.251 Firmware Activate/Download: Not Supported 00:11:30.251 Namespace Management: Supported 00:11:30.251 Device Self-Test: Not Supported 00:11:30.251 Directives: Supported 00:11:30.251 NVMe-MI: Not Supported 00:11:30.251 Virtualization Management: Not Supported 00:11:30.251 Doorbell Buffer Config: Supported 00:11:30.251 Get LBA Status Capability: Not Supported 00:11:30.251 Command & Feature Lockdown Capability: Not Supported 00:11:30.251 Abort Command Limit: 4 00:11:30.251 Async Event Request Limit: 4 00:11:30.251 Number of Firmware Slots: N/A 00:11:30.251 Firmware Slot 1 Read-Only: N/A 00:11:30.251 Firmware Activation Without Reset: N/A 00:11:30.251 Multiple Update Detection Support: N/A 00:11:30.251 Firmware Update Granularity: No Information Provided 00:11:30.251 Per-Namespace SMART Log: Yes 00:11:30.251 Asymmetric Namespace Access Log Page: Not Supported 00:11:30.251 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:30.251 Command Effects Log Page: Supported 00:11:30.251 Get Log Page Extended Data: 
Supported 00:11:30.251 Telemetry Log Pages: Not Supported 00:11:30.251 Persistent Event Log Pages: Not Supported 00:11:30.251 Supported Log Pages Log Page: May Support 00:11:30.251 Commands Supported & Effects Log Page: Not Supported 00:11:30.251 Feature Identifiers & Effects Log Page:May Support 00:11:30.251 NVMe-MI Commands & Effects Log Page: May Support 00:11:30.251 Data Area 4 for Telemetry Log: Not Supported 00:11:30.251 Error Log Page Entries Supported: 1 00:11:30.251 Keep Alive: Not Supported 00:11:30.251 00:11:30.251 NVM Command Set Attributes 00:11:30.251 ========================== 00:11:30.251 Submission Queue Entry Size 00:11:30.251 Max: 64 00:11:30.251 Min: 64 00:11:30.251 Completion Queue Entry Size 00:11:30.251 Max: 16 00:11:30.251 Min: 16 00:11:30.251 Number of Namespaces: 256 00:11:30.251 Compare Command: Supported 00:11:30.251 Write Uncorrectable Command: Not Supported 00:11:30.251 Dataset Management Command: Supported 00:11:30.251 Write Zeroes Command: Supported 00:11:30.251 Set Features Save Field: Supported 00:11:30.251 Reservations: Not Supported 00:11:30.251 Timestamp: Supported 00:11:30.251 Copy: Supported 00:11:30.251 Volatile Write Cache: Present 00:11:30.251 Atomic Write Unit (Normal): 1 00:11:30.251 Atomic Write Unit (PFail): 1 00:11:30.251 Atomic Compare & Write Unit: 1 00:11:30.251 Fused Compare & Write: Not Supported 00:11:30.251 Scatter-Gather List 00:11:30.251 SGL Command Set: Supported 00:11:30.251 SGL Keyed: Not Supported 00:11:30.251 SGL Bit Bucket Descriptor: Not Supported 00:11:30.251 SGL Metadata Pointer: Not Supported 00:11:30.251 Oversized SGL: Not Supported 00:11:30.251 SGL Metadata Address: Not Supported 00:11:30.251 SGL Offset: Not Supported 00:11:30.251 Transport SGL Data Block: Not Supported 00:11:30.251 Replay Protected Memory Block: Not Supported 00:11:30.251 00:11:30.251 Firmware Slot Information 00:11:30.251 ========================= 00:11:30.251 Active slot: 1 00:11:30.251 Slot 1 Firmware Revision: 1.0 00:11:30.251 00:11:30.251 00:11:30.251 Commands Supported and Effects 00:11:30.251 ============================== 00:11:30.251 Admin Commands 00:11:30.251 -------------- 00:11:30.251 Delete I/O Submission Queue (00h): Supported 00:11:30.251 Create I/O Submission Queue (01h): Supported 00:11:30.251 Get Log Page (02h): Supported 00:11:30.251 Delete I/O Completion Queue (04h): Supported 00:11:30.251 Create I/O Completion Queue (05h): Supported 00:11:30.251 Identify (06h): Supported 00:11:30.251 Abort (08h): Supported 00:11:30.251 Set Features (09h): Supported 00:11:30.251 Get Features (0Ah): Supported 00:11:30.251 Asynchronous Event Request (0Ch): Supported 00:11:30.251 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:30.251 Directive Send (19h): Supported 00:11:30.251 Directive Receive (1Ah): Supported 00:11:30.251 Virtualization Management (1Ch): Supported 00:11:30.251 Doorbell Buffer Config (7Ch): Supported 00:11:30.251 Format NVM (80h): Supported LBA-Change 00:11:30.251 I/O Commands 00:11:30.252 ------------ 00:11:30.252 Flush (00h): Supported LBA-Change 00:11:30.252 Write (01h): Supported LBA-Change 00:11:30.252 Read (02h): Supported 00:11:30.252 Compare (05h): Supported 00:11:30.252 Write Zeroes (08h): Supported LBA-Change 00:11:30.252 Dataset Management (09h): Supported LBA-Change 00:11:30.252 Unknown (0Ch): Supported 00:11:30.252 Unknown (12h): Supported 00:11:30.252 Copy (19h): Supported LBA-Change 00:11:30.252 Unknown (1Dh): Supported LBA-Change 00:11:30.252 00:11:30.252 Error Log 00:11:30.252 ========= 00:11:30.252 
00:11:30.252 Arbitration 00:11:30.252 =========== 00:11:30.252 Arbitration Burst: no limit 00:11:30.252 00:11:30.252 Power Management 00:11:30.252 ================ 00:11:30.252 Number of Power States: 1 00:11:30.252 Current Power State: Power State #0 00:11:30.252 Power State #0: 00:11:30.252 Max Power: 25.00 W 00:11:30.252 Non-Operational State: Operational 00:11:30.252 Entry Latency: 16 microseconds 00:11:30.252 Exit Latency: 4 microseconds 00:11:30.252 Relative Read Throughput: 0 00:11:30.252 Relative Read Latency: 0 00:11:30.252 Relative Write Throughput: 0 00:11:30.252 Relative Write Latency: 0 00:11:30.252 Idle Power: Not Reported 00:11:30.252 Active Power: Not Reported 00:11:30.252 Non-Operational Permissive Mode: Not Supported 00:11:30.252 00:11:30.252 Health Information 00:11:30.252 ================== 00:11:30.252 Critical Warnings: 00:11:30.252 Available Spare Space: OK 00:11:30.252 Temperature: OK 00:11:30.252 Device Reliability: OK 00:11:30.252 Read Only: No 00:11:30.252 Volatile Memory Backup: OK 00:11:30.252 Current Temperature: 323 Kelvin (50 Celsius) 00:11:30.252 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:30.252 Available Spare: 0% 00:11:30.252 Available Spare Threshold: 0% 00:11:30.252 Life Percentage Used: 0% 00:11:30.252 Data Units Read: 803 00:11:30.252 Data Units Written: 696 00:11:30.252 Host Read Commands: 34683 00:11:30.252 Host Write Commands: 33273 00:11:30.252 Controller Busy Time: 0 minutes 00:11:30.252 Power Cycles: 0 00:11:30.252 Power On Hours: 0 hours 00:11:30.252 Unsafe Shutdowns: 0 00:11:30.252 Unrecoverable Media Errors: 0 00:11:30.252 Lifetime Error Log Entries: 0 00:11:30.252 Warning Temperature Time: 0 minutes 00:11:30.252 Critical Temperature Time: 0 minutes 00:11:30.252 00:11:30.252 Number of Queues 00:11:30.252 ================ 00:11:30.252 Number of I/O Submission Queues: 64 00:11:30.252 Number of I/O Completion Queues: 64 00:11:30.252 00:11:30.252 ZNS Specific Controller Data 00:11:30.252 ============================ 00:11:30.252 Zone Append Size Limit: 0 00:11:30.252 00:11:30.252 00:11:30.252 Active Namespaces 00:11:30.252 ================= 00:11:30.252 Namespace ID:1 00:11:30.252 Error Recovery Timeout: Unlimited 00:11:30.252 Command Set Identifier: NVM (00h) 00:11:30.252 Deallocate: Supported 00:11:30.252 Deallocated/Unwritten Error: Supported 00:11:30.252 Deallocated Read Value: All 0x00 00:11:30.252 Deallocate in Write Zeroes: Not Supported 00:11:30.252 Deallocated Guard Field: 0xFFFF 00:11:30.252 Flush: Supported 00:11:30.252 Reservation: Not Supported 00:11:30.252 Namespace Sharing Capabilities: Multiple Controllers 00:11:30.252 Size (in LBAs): 262144 (1GiB) 00:11:30.252 Capacity (in LBAs): 262144 (1GiB) 00:11:30.252 Utilization (in LBAs): 262144 (1GiB) 00:11:30.252 Thin Provisioning: Not Supported 00:11:30.252 Per-NS Atomic Units: No 00:11:30.252 Maximum Single Source Range Length: 128 00:11:30.252 Maximum Copy Length: 128 00:11:30.252 Maximum Source Range Count: 128 00:11:30.252 NGUID/EUI64 Never Reused: No 00:11:30.252 Namespace Write Protected: No 00:11:30.252 Endurance group ID: 1 00:11:30.252 Number of LBA Formats: 8 00:11:30.252 Current LBA Format: LBA Format #04 00:11:30.252 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:30.252 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:30.252 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:30.252 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:30.252 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:30.252 LBA Format #05: Data Size: 4096 Metadata Size: 
8 00:11:30.252 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:30.252 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:30.252 00:11:30.252 Get Feature FDP: 00:11:30.252 ================ 00:11:30.252 Enabled: Yes 00:11:30.252 FDP configuration index: 0 00:11:30.252 00:11:30.252 FDP configurations log page 00:11:30.252 =========================== 00:11:30.252 Number of FDP configurations: 1 00:11:30.252 Version: 0 00:11:30.252 Size: 112 00:11:30.252 FDP Configuration Descriptor: 0 00:11:30.252 Descriptor Size: 96 00:11:30.252 Reclaim Group Identifier format: 2 00:11:30.252 FDP Volatile Write Cache: Not Present 00:11:30.252 FDP Configuration: Valid 00:11:30.252 Vendor Specific Size: 0 00:11:30.252 Number of Reclaim Groups: 2 00:11:30.252 Number of Recalim Unit Handles: 8 00:11:30.252 Max Placement Identifiers: 128 00:11:30.252 Number of Namespaces Suppprted: 256 00:11:30.252 Reclaim unit Nominal Size: 6000000 bytes 00:11:30.252 Estimated Reclaim Unit Time Limit: Not Reported 00:11:30.252 RUH Desc #000: RUH Type: Initially Isolated 00:11:30.252 RUH Desc #001: RUH Type: Initially Isolated 00:11:30.252 RUH Desc #002: RUH Type: Initially Isolated 00:11:30.252 RUH Desc #003: RUH Type: Initially Isolated 00:11:30.252 RUH Desc #004: RUH Type: Initially Isolated 00:11:30.252 RUH Desc #005: RUH Type: Initially Isolated 00:11:30.252 RUH Desc #006: RUH Type: Initially Isolated 00:11:30.252 RUH Desc #007: RUH Type: Initially Isolated 00:11:30.252 00:11:30.252 FDP reclaim unit handle usage log page 00:11:30.252 ====================================== 00:11:30.252 Number of Reclaim Unit Handles: 8 00:11:30.252 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:30.252 RUH Usage Desc #001: RUH Attributes: Unused 00:11:30.252 RUH Usage Desc #002: RUH Attributes: Unused 00:11:30.252 RUH Usage Desc #003: RUH Attributes: Unused 00:11:30.252 RUH Usage Desc #004: RUH Attributes: Unused 00:11:30.252 RUH Usage Desc #005: RUH Attributes: Unused 00:11:30.252 RUH Usage Desc #006: RUH Attributes: Unused 00:11:30.252 RUH Usage Desc #007: RUH Attributes: Unused 00:11:30.252 00:11:30.252 FDP statistics log page 00:11:30.252 ======================= 00:11:30.252 Host bytes with metadata written: 434282496 00:11:30.252 Media bytes with metadata written: 434348032 00:11:30.252 Media bytes erased: 0 00:11:30.252 00:11:30.252 FDP events log page 00:11:30.252 =================== 00:11:30.252 Number of FDP events: 0 00:11:30.252 00:11:30.252 NVM Specific Namespace Data 00:11:30.252 =========================== 00:11:30.252 Logical Block Storage Tag Mask: 0 00:11:30.252 Protection Information Capabilities: 00:11:30.252 16b Guard Protection Information Storage Tag Support: No 00:11:30.252 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:30.252 Storage Tag Check Read Support: No 00:11:30.252 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:30.252 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:30.252 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:30.252 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:30.252 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:30.252 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:30.252 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information 
Format: 16b Guard PI 00:11:30.252 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:30.252 00:11:30.252 real 0m1.601s 00:11:30.252 user 0m0.697s 00:11:30.252 sys 0m0.723s 00:11:30.252 09:17:16 nvme.nvme_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:30.252 09:17:16 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:11:30.252 ************************************ 00:11:30.252 END TEST nvme_identify 00:11:30.252 ************************************ 00:11:30.252 09:17:16 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:30.252 09:17:16 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:11:30.252 09:17:16 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:30.252 09:17:16 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:30.252 09:17:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:30.252 ************************************ 00:11:30.252 START TEST nvme_perf 00:11:30.252 ************************************ 00:11:30.252 09:17:16 nvme.nvme_perf -- common/autotest_common.sh@1123 -- # nvme_perf 00:11:30.252 09:17:16 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:11:31.630 Initializing NVMe Controllers 00:11:31.630 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:31.630 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:31.630 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:31.630 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:31.630 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:31.630 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:31.630 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:31.630 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:31.630 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:31.630 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:31.630 Initialization complete. Launching workers. 
00:11:31.630 ======================================================== 00:11:31.630 Latency(us) 00:11:31.630 Device Information : IOPS MiB/s Average min max 00:11:31.630 PCIE (0000:00:10.0) NSID 1 from core 0: 12504.21 146.53 10251.21 7559.39 39823.39 00:11:31.630 PCIE (0000:00:11.0) NSID 1 from core 0: 12504.21 146.53 10227.95 7265.23 37415.20 00:11:31.630 PCIE (0000:00:13.0) NSID 1 from core 0: 12504.21 146.53 10202.58 7591.41 35321.77 00:11:31.630 PCIE (0000:00:12.0) NSID 1 from core 0: 12504.21 146.53 10176.93 7611.47 32794.26 00:11:31.630 PCIE (0000:00:12.0) NSID 2 from core 0: 12504.21 146.53 10150.88 7632.36 30239.78 00:11:31.630 PCIE (0000:00:12.0) NSID 3 from core 0: 12504.21 146.53 10125.83 7658.97 28082.62 00:11:31.630 ======================================================== 00:11:31.630 Total : 75025.27 879.20 10189.23 7265.23 39823.39 00:11:31.630 00:11:31.630 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:31.630 ================================================================================= 00:11:31.630 1.00000% : 7804.742us 00:11:31.630 10.00000% : 8281.367us 00:11:31.630 25.00000% : 8817.571us 00:11:31.630 50.00000% : 9472.931us 00:11:31.630 75.00000% : 10187.869us 00:11:31.630 90.00000% : 12988.044us 00:11:31.630 95.00000% : 14417.920us 00:11:31.630 98.00000% : 26214.400us 00:11:31.630 99.00000% : 29669.935us 00:11:31.630 99.50000% : 37415.098us 00:11:31.630 99.90000% : 39321.600us 00:11:31.630 99.99000% : 39798.225us 00:11:31.630 99.99900% : 40036.538us 00:11:31.630 99.99990% : 40036.538us 00:11:31.630 99.99999% : 40036.538us 00:11:31.630 00:11:31.630 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:31.630 ================================================================================= 00:11:31.630 1.00000% : 7864.320us 00:11:31.630 10.00000% : 8340.945us 00:11:31.630 25.00000% : 8817.571us 00:11:31.630 50.00000% : 9472.931us 00:11:31.630 75.00000% : 10128.291us 00:11:31.630 90.00000% : 13047.622us 00:11:31.630 95.00000% : 14477.498us 00:11:31.630 98.00000% : 25022.836us 00:11:31.630 99.00000% : 27763.433us 00:11:31.630 99.50000% : 35031.971us 00:11:31.630 99.90000% : 37176.785us 00:11:31.630 99.99000% : 37415.098us 00:11:31.630 99.99900% : 37653.411us 00:11:31.630 99.99990% : 37653.411us 00:11:31.630 99.99999% : 37653.411us 00:11:31.630 00:11:31.630 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:31.630 ================================================================================= 00:11:31.630 1.00000% : 7923.898us 00:11:31.630 10.00000% : 8340.945us 00:11:31.630 25.00000% : 8817.571us 00:11:31.630 50.00000% : 9472.931us 00:11:31.630 75.00000% : 10128.291us 00:11:31.630 90.00000% : 13047.622us 00:11:31.630 95.00000% : 14298.764us 00:11:31.630 98.00000% : 25022.836us 00:11:31.630 99.00000% : 27048.495us 00:11:31.630 99.50000% : 32887.156us 00:11:31.630 99.90000% : 35031.971us 00:11:31.630 99.99000% : 35508.596us 00:11:31.630 99.99900% : 35508.596us 00:11:31.630 99.99990% : 35508.596us 00:11:31.630 99.99999% : 35508.596us 00:11:31.630 00:11:31.630 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:31.630 ================================================================================= 00:11:31.630 1.00000% : 7864.320us 00:11:31.630 10.00000% : 8340.945us 00:11:31.630 25.00000% : 8817.571us 00:11:31.630 50.00000% : 9472.931us 00:11:31.630 75.00000% : 10068.713us 00:11:31.630 90.00000% : 13047.622us 00:11:31.630 95.00000% : 14298.764us 00:11:31.630 98.00000% : 22997.178us 
00:11:31.630 99.00000% : 26810.182us 00:11:31.630 99.50000% : 30384.873us 00:11:31.630 99.90000% : 32410.531us 00:11:31.630 99.99000% : 32887.156us 00:11:31.630 99.99900% : 32887.156us 00:11:31.630 99.99990% : 32887.156us 00:11:31.630 99.99999% : 32887.156us 00:11:31.630 00:11:31.630 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:31.630 ================================================================================= 00:11:31.630 1.00000% : 7923.898us 00:11:31.630 10.00000% : 8340.945us 00:11:31.630 25.00000% : 8817.571us 00:11:31.630 50.00000% : 9472.931us 00:11:31.630 75.00000% : 10068.713us 00:11:31.630 90.00000% : 13047.622us 00:11:31.630 95.00000% : 14298.764us 00:11:31.630 98.00000% : 20494.895us 00:11:31.630 99.00000% : 26929.338us 00:11:31.630 99.50000% : 28120.902us 00:11:31.630 99.90000% : 29789.091us 00:11:31.630 99.99000% : 30265.716us 00:11:31.630 99.99900% : 30265.716us 00:11:31.630 99.99990% : 30265.716us 00:11:31.630 99.99999% : 30265.716us 00:11:31.630 00:11:31.630 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:31.630 ================================================================================= 00:11:31.630 1.00000% : 7923.898us 00:11:31.630 10.00000% : 8340.945us 00:11:31.630 25.00000% : 8817.571us 00:11:31.630 50.00000% : 9472.931us 00:11:31.630 75.00000% : 10068.713us 00:11:31.630 90.00000% : 12988.044us 00:11:31.630 95.00000% : 14358.342us 00:11:31.630 98.00000% : 18111.767us 00:11:31.630 99.00000% : 26691.025us 00:11:31.630 99.50000% : 27286.807us 00:11:31.630 99.90000% : 27882.589us 00:11:31.630 99.99000% : 28120.902us 00:11:31.630 99.99900% : 28120.902us 00:11:31.630 99.99990% : 28120.902us 00:11:31.630 99.99999% : 28120.902us 00:11:31.630 00:11:31.630 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:31.630 ============================================================================== 00:11:31.630 Range in us Cumulative IO count 00:11:31.630 7536.640 - 7566.429: 0.0239% ( 3) 00:11:31.630 7596.218 - 7626.007: 0.0478% ( 3) 00:11:31.630 7626.007 - 7685.585: 0.2073% ( 20) 00:11:31.630 7685.585 - 7745.164: 0.6218% ( 52) 00:11:31.630 7745.164 - 7804.742: 1.2117% ( 74) 00:11:31.630 7804.742 - 7864.320: 1.9691% ( 95) 00:11:31.630 7864.320 - 7923.898: 2.7663% ( 100) 00:11:31.630 7923.898 - 7983.476: 3.7468% ( 123) 00:11:31.630 7983.476 - 8043.055: 4.8151% ( 134) 00:11:31.630 8043.055 - 8102.633: 6.0268% ( 152) 00:11:31.630 8102.633 - 8162.211: 7.4059% ( 173) 00:11:31.631 8162.211 - 8221.789: 8.8249% ( 178) 00:11:31.631 8221.789 - 8281.367: 10.3077% ( 186) 00:11:31.631 8281.367 - 8340.945: 11.9340% ( 204) 00:11:31.631 8340.945 - 8400.524: 13.5284% ( 200) 00:11:31.631 8400.524 - 8460.102: 15.0670% ( 193) 00:11:31.631 8460.102 - 8519.680: 16.7889% ( 216) 00:11:31.631 8519.680 - 8579.258: 18.3833% ( 200) 00:11:31.631 8579.258 - 8638.836: 20.1610% ( 223) 00:11:31.631 8638.836 - 8698.415: 21.8351% ( 210) 00:11:31.631 8698.415 - 8757.993: 23.6448% ( 227) 00:11:31.631 8757.993 - 8817.571: 25.5740% ( 242) 00:11:31.631 8817.571 - 8877.149: 27.5510% ( 248) 00:11:31.631 8877.149 - 8936.727: 29.5918% ( 256) 00:11:31.631 8936.727 - 8996.305: 31.9356% ( 294) 00:11:31.631 8996.305 - 9055.884: 34.5185% ( 324) 00:11:31.631 9055.884 - 9115.462: 36.9101% ( 300) 00:11:31.631 9115.462 - 9175.040: 39.5249% ( 328) 00:11:31.631 9175.040 - 9234.618: 42.1078% ( 324) 00:11:31.631 9234.618 - 9294.196: 44.4196% ( 290) 00:11:31.631 9294.196 - 9353.775: 46.8192% ( 301) 00:11:31.631 9353.775 - 9413.353: 49.1390% ( 291) 00:11:31.631 
9413.353 - 9472.931: 51.5067% ( 297) 00:11:31.631 9472.931 - 9532.509: 53.7707% ( 284) 00:11:31.631 9532.509 - 9592.087: 56.0507% ( 286) 00:11:31.631 9592.087 - 9651.665: 58.2828% ( 280) 00:11:31.631 9651.665 - 9711.244: 60.4831% ( 276) 00:11:31.631 9711.244 - 9770.822: 62.7232% ( 281) 00:11:31.631 9770.822 - 9830.400: 64.8756% ( 270) 00:11:31.631 9830.400 - 9889.978: 66.9483% ( 260) 00:11:31.631 9889.978 - 9949.556: 69.1645% ( 278) 00:11:31.631 9949.556 - 10009.135: 71.0300% ( 234) 00:11:31.631 10009.135 - 10068.713: 72.6084% ( 198) 00:11:31.631 10068.713 - 10128.291: 74.1550% ( 194) 00:11:31.631 10128.291 - 10187.869: 75.3827% ( 154) 00:11:31.631 10187.869 - 10247.447: 76.3712% ( 124) 00:11:31.631 10247.447 - 10307.025: 77.1365% ( 96) 00:11:31.631 10307.025 - 10366.604: 77.6387% ( 63) 00:11:31.631 10366.604 - 10426.182: 78.1250% ( 61) 00:11:31.631 10426.182 - 10485.760: 78.5953% ( 59) 00:11:31.631 10485.760 - 10545.338: 78.9780% ( 48) 00:11:31.631 10545.338 - 10604.916: 79.2969% ( 40) 00:11:31.631 10604.916 - 10664.495: 79.7274% ( 54) 00:11:31.631 10664.495 - 10724.073: 80.1499% ( 53) 00:11:31.631 10724.073 - 10783.651: 80.5325% ( 48) 00:11:31.631 10783.651 - 10843.229: 80.8673% ( 42) 00:11:31.631 10843.229 - 10902.807: 81.2739% ( 51) 00:11:31.631 10902.807 - 10962.385: 81.5928% ( 40) 00:11:31.631 10962.385 - 11021.964: 81.9515% ( 45) 00:11:31.631 11021.964 - 11081.542: 82.3501% ( 50) 00:11:31.631 11081.542 - 11141.120: 82.6929% ( 43) 00:11:31.631 11141.120 - 11200.698: 83.0357% ( 43) 00:11:31.631 11200.698 - 11260.276: 83.3865% ( 44) 00:11:31.631 11260.276 - 11319.855: 83.6735% ( 36) 00:11:31.631 11319.855 - 11379.433: 83.9684% ( 37) 00:11:31.631 11379.433 - 11439.011: 84.2395% ( 34) 00:11:31.631 11439.011 - 11498.589: 84.5424% ( 38) 00:11:31.631 11498.589 - 11558.167: 84.8055% ( 33) 00:11:31.631 11558.167 - 11617.745: 85.0845% ( 35) 00:11:31.631 11617.745 - 11677.324: 85.2997% ( 27) 00:11:31.631 11677.324 - 11736.902: 85.4990% ( 25) 00:11:31.631 11736.902 - 11796.480: 85.7462% ( 31) 00:11:31.631 11796.480 - 11856.058: 85.9853% ( 30) 00:11:31.631 11856.058 - 11915.636: 86.2564% ( 34) 00:11:31.631 11915.636 - 11975.215: 86.4716% ( 27) 00:11:31.631 11975.215 - 12034.793: 86.6948% ( 28) 00:11:31.631 12034.793 - 12094.371: 86.9101% ( 27) 00:11:31.631 12094.371 - 12153.949: 87.1094% ( 25) 00:11:31.631 12153.949 - 12213.527: 87.2848% ( 22) 00:11:31.631 12213.527 - 12273.105: 87.4681% ( 23) 00:11:31.631 12273.105 - 12332.684: 87.6754% ( 26) 00:11:31.631 12332.684 - 12392.262: 87.8906% ( 27) 00:11:31.631 12392.262 - 12451.840: 88.0979% ( 26) 00:11:31.631 12451.840 - 12511.418: 88.2892% ( 24) 00:11:31.631 12511.418 - 12570.996: 88.4726% ( 23) 00:11:31.631 12570.996 - 12630.575: 88.6719% ( 25) 00:11:31.631 12630.575 - 12690.153: 88.9190% ( 31) 00:11:31.631 12690.153 - 12749.731: 89.1661% ( 31) 00:11:31.631 12749.731 - 12809.309: 89.4053% ( 30) 00:11:31.631 12809.309 - 12868.887: 89.6524% ( 31) 00:11:31.631 12868.887 - 12928.465: 89.8836% ( 29) 00:11:31.631 12928.465 - 12988.044: 90.1467% ( 33) 00:11:31.631 12988.044 - 13047.622: 90.3938% ( 31) 00:11:31.631 13047.622 - 13107.200: 90.6409% ( 31) 00:11:31.631 13107.200 - 13166.778: 90.9040% ( 33) 00:11:31.631 13166.778 - 13226.356: 91.1591% ( 32) 00:11:31.631 13226.356 - 13285.935: 91.3823% ( 28) 00:11:31.631 13285.935 - 13345.513: 91.6534% ( 34) 00:11:31.631 13345.513 - 13405.091: 91.9404% ( 36) 00:11:31.631 13405.091 - 13464.669: 92.1397% ( 25) 00:11:31.631 13464.669 - 13524.247: 92.3549% ( 27) 00:11:31.631 13524.247 - 13583.825: 92.5861% ( 29) 
00:11:31.631 13583.825 - 13643.404: 92.8013% ( 27) 00:11:31.631 13643.404 - 13702.982: 93.0325% ( 29) 00:11:31.631 13702.982 - 13762.560: 93.2398% ( 26) 00:11:31.631 13762.560 - 13822.138: 93.4232% ( 23) 00:11:31.631 13822.138 - 13881.716: 93.5746% ( 19) 00:11:31.631 13881.716 - 13941.295: 93.7181% ( 18) 00:11:31.631 13941.295 - 14000.873: 93.8776% ( 20) 00:11:31.631 14000.873 - 14060.451: 94.0290% ( 19) 00:11:31.631 14060.451 - 14120.029: 94.1725% ( 18) 00:11:31.631 14120.029 - 14179.607: 94.3160% ( 18) 00:11:31.631 14179.607 - 14239.185: 94.5073% ( 24) 00:11:31.631 14239.185 - 14298.764: 94.6827% ( 22) 00:11:31.631 14298.764 - 14358.342: 94.9139% ( 29) 00:11:31.631 14358.342 - 14417.920: 95.0893% ( 22) 00:11:31.631 14417.920 - 14477.498: 95.3284% ( 30) 00:11:31.631 14477.498 - 14537.076: 95.5277% ( 25) 00:11:31.631 14537.076 - 14596.655: 95.7510% ( 28) 00:11:31.631 14596.655 - 14656.233: 95.9024% ( 19) 00:11:31.631 14656.233 - 14715.811: 96.1017% ( 25) 00:11:31.631 14715.811 - 14775.389: 96.2771% ( 22) 00:11:31.631 14775.389 - 14834.967: 96.4206% ( 18) 00:11:31.631 14834.967 - 14894.545: 96.5960% ( 22) 00:11:31.631 14894.545 - 14954.124: 96.7235% ( 16) 00:11:31.631 14954.124 - 15013.702: 96.8750% ( 19) 00:11:31.631 15013.702 - 15073.280: 97.0344% ( 20) 00:11:31.631 15073.280 - 15132.858: 97.1939% ( 20) 00:11:31.631 15132.858 - 15192.436: 97.3294% ( 17) 00:11:31.631 15192.436 - 15252.015: 97.4490% ( 15) 00:11:31.631 15252.015 - 15371.171: 97.7121% ( 33) 00:11:31.631 15371.171 - 15490.327: 97.9193% ( 26) 00:11:31.631 15490.327 - 15609.484: 97.9592% ( 5) 00:11:31.631 25976.087 - 26095.244: 97.9672% ( 1) 00:11:31.631 26095.244 - 26214.400: 98.0070% ( 5) 00:11:31.631 26214.400 - 26333.556: 98.0389% ( 4) 00:11:31.631 26333.556 - 26452.713: 98.0788% ( 5) 00:11:31.631 26452.713 - 26571.869: 98.1027% ( 3) 00:11:31.631 26571.869 - 26691.025: 98.1505% ( 6) 00:11:31.631 26691.025 - 26810.182: 98.2063% ( 7) 00:11:31.631 26810.182 - 26929.338: 98.2621% ( 7) 00:11:31.631 26929.338 - 27048.495: 98.3498% ( 11) 00:11:31.631 27048.495 - 27167.651: 98.4136% ( 8) 00:11:31.631 27167.651 - 27286.807: 98.4853% ( 9) 00:11:31.631 27286.807 - 27405.964: 98.5571% ( 9) 00:11:31.631 27405.964 - 27525.120: 98.6288% ( 9) 00:11:31.631 27525.120 - 27644.276: 98.7085% ( 10) 00:11:31.631 27644.276 - 27763.433: 98.7723% ( 8) 00:11:31.631 27763.433 - 27882.589: 98.8042% ( 4) 00:11:31.631 27882.589 - 28001.745: 98.8361% ( 4) 00:11:31.631 28001.745 - 28120.902: 98.8680% ( 4) 00:11:31.631 28120.902 - 28240.058: 98.9078% ( 5) 00:11:31.631 28240.058 - 28359.215: 98.9397% ( 4) 00:11:31.631 28359.215 - 28478.371: 98.9796% ( 5) 00:11:31.631 29312.465 - 29431.622: 98.9876% ( 1) 00:11:31.631 29431.622 - 29550.778: 98.9955% ( 1) 00:11:31.631 29550.778 - 29669.935: 99.0195% ( 3) 00:11:31.631 29669.935 - 29789.091: 99.0354% ( 2) 00:11:31.631 29789.091 - 29908.247: 99.0513% ( 2) 00:11:31.631 29908.247 - 30027.404: 99.0753% ( 3) 00:11:31.631 30027.404 - 30146.560: 99.0992% ( 3) 00:11:31.631 30146.560 - 30265.716: 99.1151% ( 2) 00:11:31.631 30265.716 - 30384.873: 99.1390% ( 3) 00:11:31.631 30384.873 - 30504.029: 99.1629% ( 3) 00:11:31.631 30504.029 - 30742.342: 99.2028% ( 5) 00:11:31.631 30742.342 - 30980.655: 99.2586% ( 7) 00:11:31.631 30980.655 - 31218.967: 99.2985% ( 5) 00:11:31.631 31218.967 - 31457.280: 99.3304% ( 4) 00:11:31.631 31457.280 - 31695.593: 99.3782% ( 6) 00:11:31.631 31695.593 - 31933.905: 99.4340% ( 7) 00:11:31.631 31933.905 - 32172.218: 99.4818% ( 6) 00:11:31.631 32172.218 - 32410.531: 99.4898% ( 1) 00:11:31.631 37176.785 
- 37415.098: 99.5376% ( 6) 00:11:31.631 37415.098 - 37653.411: 99.5934% ( 7) 00:11:31.631 37653.411 - 37891.724: 99.6253% ( 4) 00:11:31.631 37891.724 - 38130.036: 99.6732% ( 6) 00:11:31.631 38130.036 - 38368.349: 99.7130% ( 5) 00:11:31.631 38368.349 - 38606.662: 99.7529% ( 5) 00:11:31.631 38606.662 - 38844.975: 99.8087% ( 7) 00:11:31.631 38844.975 - 39083.287: 99.8485% ( 5) 00:11:31.631 39083.287 - 39321.600: 99.9043% ( 7) 00:11:31.631 39321.600 - 39559.913: 99.9442% ( 5) 00:11:31.631 39559.913 - 39798.225: 99.9920% ( 6) 00:11:31.631 39798.225 - 40036.538: 100.0000% ( 1) 00:11:31.631 00:11:31.631 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:31.631 ============================================================================== 00:11:31.631 Range in us Cumulative IO count 00:11:31.631 7238.749 - 7268.538: 0.0080% ( 1) 00:11:31.631 7268.538 - 7298.327: 0.0239% ( 2) 00:11:31.631 7298.327 - 7328.116: 0.0399% ( 2) 00:11:31.631 7328.116 - 7357.905: 0.0558% ( 2) 00:11:31.631 7357.905 - 7387.695: 0.0717% ( 2) 00:11:31.631 7387.695 - 7417.484: 0.0877% ( 2) 00:11:31.631 7417.484 - 7447.273: 0.1036% ( 2) 00:11:31.631 7447.273 - 7477.062: 0.1196% ( 2) 00:11:31.631 7477.062 - 7506.851: 0.1355% ( 2) 00:11:31.631 7506.851 - 7536.640: 0.1594% ( 3) 00:11:31.631 7536.640 - 7566.429: 0.1754% ( 2) 00:11:31.631 7566.429 - 7596.218: 0.1913% ( 2) 00:11:31.631 7596.218 - 7626.007: 0.2152% ( 3) 00:11:31.631 7626.007 - 7685.585: 0.3029% ( 11) 00:11:31.631 7685.585 - 7745.164: 0.4145% ( 14) 00:11:31.631 7745.164 - 7804.742: 0.7733% ( 45) 00:11:31.631 7804.742 - 7864.320: 1.3791% ( 76) 00:11:31.631 7864.320 - 7923.898: 2.1445% ( 96) 00:11:31.631 7923.898 - 7983.476: 3.0931% ( 119) 00:11:31.631 7983.476 - 8043.055: 4.1773% ( 136) 00:11:31.631 8043.055 - 8102.633: 5.3890% ( 152) 00:11:31.631 8102.633 - 8162.211: 6.7921% ( 176) 00:11:31.631 8162.211 - 8221.789: 8.2430% ( 182) 00:11:31.632 8221.789 - 8281.367: 9.7895% ( 194) 00:11:31.632 8281.367 - 8340.945: 11.4636% ( 210) 00:11:31.632 8340.945 - 8400.524: 13.1378% ( 210) 00:11:31.632 8400.524 - 8460.102: 14.9075% ( 222) 00:11:31.632 8460.102 - 8519.680: 16.7012% ( 225) 00:11:31.632 8519.680 - 8579.258: 18.6464% ( 244) 00:11:31.632 8579.258 - 8638.836: 20.5517% ( 239) 00:11:31.632 8638.836 - 8698.415: 22.3932% ( 231) 00:11:31.632 8698.415 - 8757.993: 24.0274% ( 205) 00:11:31.632 8757.993 - 8817.571: 25.5660% ( 193) 00:11:31.632 8817.571 - 8877.149: 27.1445% ( 198) 00:11:31.632 8877.149 - 8936.727: 28.8186% ( 210) 00:11:31.632 8936.727 - 8996.305: 30.6840% ( 234) 00:11:31.632 8996.305 - 9055.884: 32.7487% ( 259) 00:11:31.632 9055.884 - 9115.462: 35.0606% ( 290) 00:11:31.632 9115.462 - 9175.040: 37.6355% ( 323) 00:11:31.632 9175.040 - 9234.618: 40.2503% ( 328) 00:11:31.632 9234.618 - 9294.196: 42.9289% ( 336) 00:11:31.632 9294.196 - 9353.775: 45.5517% ( 329) 00:11:31.632 9353.775 - 9413.353: 48.1983% ( 332) 00:11:31.632 9413.353 - 9472.931: 50.8052% ( 327) 00:11:31.632 9472.931 - 9532.509: 53.5236% ( 341) 00:11:31.632 9532.509 - 9592.087: 56.2022% ( 336) 00:11:31.632 9592.087 - 9651.665: 58.8329% ( 330) 00:11:31.632 9651.665 - 9711.244: 61.5354% ( 339) 00:11:31.632 9711.244 - 9770.822: 64.0466% ( 315) 00:11:31.632 9770.822 - 9830.400: 66.6215% ( 323) 00:11:31.632 9830.400 - 9889.978: 68.9015% ( 286) 00:11:31.632 9889.978 - 9949.556: 71.0300% ( 267) 00:11:31.632 9949.556 - 10009.135: 72.8795% ( 232) 00:11:31.632 10009.135 - 10068.713: 74.3463% ( 184) 00:11:31.632 10068.713 - 10128.291: 75.3747% ( 129) 00:11:31.632 10128.291 - 10187.869: 76.1639% ( 99) 
00:11:31.632 10187.869 - 10247.447: 76.8415% ( 85) 00:11:31.632 10247.447 - 10307.025: 77.3358% ( 62) 00:11:31.632 10307.025 - 10366.604: 77.7902% ( 57) 00:11:31.632 10366.604 - 10426.182: 78.1170% ( 41) 00:11:31.632 10426.182 - 10485.760: 78.3721% ( 32) 00:11:31.632 10485.760 - 10545.338: 78.6671% ( 37) 00:11:31.632 10545.338 - 10604.916: 79.0019% ( 42) 00:11:31.632 10604.916 - 10664.495: 79.3367% ( 42) 00:11:31.632 10664.495 - 10724.073: 79.7672% ( 54) 00:11:31.632 10724.073 - 10783.651: 80.1738% ( 51) 00:11:31.632 10783.651 - 10843.229: 80.5724% ( 50) 00:11:31.632 10843.229 - 10902.807: 80.9869% ( 52) 00:11:31.632 10902.807 - 10962.385: 81.4094% ( 53) 00:11:31.632 10962.385 - 11021.964: 81.8638% ( 57) 00:11:31.632 11021.964 - 11081.542: 82.2465% ( 48) 00:11:31.632 11081.542 - 11141.120: 82.5733% ( 41) 00:11:31.632 11141.120 - 11200.698: 82.9002% ( 41) 00:11:31.632 11200.698 - 11260.276: 83.1872% ( 36) 00:11:31.632 11260.276 - 11319.855: 83.4821% ( 37) 00:11:31.632 11319.855 - 11379.433: 83.7771% ( 37) 00:11:31.632 11379.433 - 11439.011: 84.1040% ( 41) 00:11:31.632 11439.011 - 11498.589: 84.4308% ( 41) 00:11:31.632 11498.589 - 11558.167: 84.7258% ( 37) 00:11:31.632 11558.167 - 11617.745: 84.9490% ( 28) 00:11:31.632 11617.745 - 11677.324: 85.1961% ( 31) 00:11:31.632 11677.324 - 11736.902: 85.4273% ( 29) 00:11:31.632 11736.902 - 11796.480: 85.6665% ( 30) 00:11:31.632 11796.480 - 11856.058: 85.8658% ( 25) 00:11:31.632 11856.058 - 11915.636: 86.0571% ( 24) 00:11:31.632 11915.636 - 11975.215: 86.2564% ( 25) 00:11:31.632 11975.215 - 12034.793: 86.4477% ( 24) 00:11:31.632 12034.793 - 12094.371: 86.6311% ( 23) 00:11:31.632 12094.371 - 12153.949: 86.7985% ( 21) 00:11:31.632 12153.949 - 12213.527: 86.9659% ( 21) 00:11:31.632 12213.527 - 12273.105: 87.1333% ( 21) 00:11:31.632 12273.105 - 12332.684: 87.3166% ( 23) 00:11:31.632 12332.684 - 12392.262: 87.5000% ( 23) 00:11:31.632 12392.262 - 12451.840: 87.7152% ( 27) 00:11:31.632 12451.840 - 12511.418: 87.9464% ( 29) 00:11:31.632 12511.418 - 12570.996: 88.1696% ( 28) 00:11:31.632 12570.996 - 12630.575: 88.3689% ( 25) 00:11:31.632 12630.575 - 12690.153: 88.6320% ( 33) 00:11:31.632 12690.153 - 12749.731: 88.8791% ( 31) 00:11:31.632 12749.731 - 12809.309: 89.1980% ( 40) 00:11:31.632 12809.309 - 12868.887: 89.4212% ( 28) 00:11:31.632 12868.887 - 12928.465: 89.6684% ( 31) 00:11:31.632 12928.465 - 12988.044: 89.8996% ( 29) 00:11:31.632 12988.044 - 13047.622: 90.1706% ( 34) 00:11:31.632 13047.622 - 13107.200: 90.4018% ( 29) 00:11:31.632 13107.200 - 13166.778: 90.6728% ( 34) 00:11:31.632 13166.778 - 13226.356: 90.9439% ( 34) 00:11:31.632 13226.356 - 13285.935: 91.2070% ( 33) 00:11:31.632 13285.935 - 13345.513: 91.4860% ( 35) 00:11:31.632 13345.513 - 13405.091: 91.7331% ( 31) 00:11:31.632 13405.091 - 13464.669: 91.9723% ( 30) 00:11:31.632 13464.669 - 13524.247: 92.2194% ( 31) 00:11:31.632 13524.247 - 13583.825: 92.4267% ( 26) 00:11:31.632 13583.825 - 13643.404: 92.6419% ( 27) 00:11:31.632 13643.404 - 13702.982: 92.8731% ( 29) 00:11:31.632 13702.982 - 13762.560: 93.0724% ( 25) 00:11:31.632 13762.560 - 13822.138: 93.2876% ( 27) 00:11:31.632 13822.138 - 13881.716: 93.4869% ( 25) 00:11:31.632 13881.716 - 13941.295: 93.6384% ( 19) 00:11:31.632 13941.295 - 14000.873: 93.7899% ( 19) 00:11:31.632 14000.873 - 14060.451: 93.9493% ( 20) 00:11:31.632 14060.451 - 14120.029: 94.1167% ( 21) 00:11:31.632 14120.029 - 14179.607: 94.2761% ( 20) 00:11:31.632 14179.607 - 14239.185: 94.4436% ( 21) 00:11:31.632 14239.185 - 14298.764: 94.6349% ( 24) 00:11:31.632 14298.764 - 14358.342: 
94.7864% ( 19) 00:11:31.632 14358.342 - 14417.920: 94.9538% ( 21) 00:11:31.632 14417.920 - 14477.498: 95.1371% ( 23) 00:11:31.632 14477.498 - 14537.076: 95.3842% ( 31) 00:11:31.632 14537.076 - 14596.655: 95.6314% ( 31) 00:11:31.632 14596.655 - 14656.233: 95.8705% ( 30) 00:11:31.632 14656.233 - 14715.811: 96.0619% ( 24) 00:11:31.632 14715.811 - 14775.389: 96.2293% ( 21) 00:11:31.632 14775.389 - 14834.967: 96.4365% ( 26) 00:11:31.632 14834.967 - 14894.545: 96.6279% ( 24) 00:11:31.632 14894.545 - 14954.124: 96.8112% ( 23) 00:11:31.632 14954.124 - 15013.702: 96.9946% ( 23) 00:11:31.632 15013.702 - 15073.280: 97.1859% ( 24) 00:11:31.632 15073.280 - 15132.858: 97.3214% ( 17) 00:11:31.632 15132.858 - 15192.436: 97.4490% ( 16) 00:11:31.632 15192.436 - 15252.015: 97.5606% ( 14) 00:11:31.632 15252.015 - 15371.171: 97.7679% ( 26) 00:11:31.632 15371.171 - 15490.327: 97.9034% ( 17) 00:11:31.632 15490.327 - 15609.484: 97.9432% ( 5) 00:11:31.632 15609.484 - 15728.640: 97.9592% ( 2) 00:11:31.632 24784.524 - 24903.680: 97.9990% ( 5) 00:11:31.632 24903.680 - 25022.836: 98.0548% ( 7) 00:11:31.632 25022.836 - 25141.993: 98.1027% ( 6) 00:11:31.632 25141.993 - 25261.149: 98.1425% ( 5) 00:11:31.632 25261.149 - 25380.305: 98.1904% ( 6) 00:11:31.632 25380.305 - 25499.462: 98.2382% ( 6) 00:11:31.632 25499.462 - 25618.618: 98.2781% ( 5) 00:11:31.632 25618.618 - 25737.775: 98.3339% ( 7) 00:11:31.632 25737.775 - 25856.931: 98.3737% ( 5) 00:11:31.632 25856.931 - 25976.087: 98.4216% ( 6) 00:11:31.632 25976.087 - 26095.244: 98.4614% ( 5) 00:11:31.632 26095.244 - 26214.400: 98.4694% ( 1) 00:11:31.632 26452.713 - 26571.869: 98.5092% ( 5) 00:11:31.632 26571.869 - 26691.025: 98.5651% ( 7) 00:11:31.632 26691.025 - 26810.182: 98.6129% ( 6) 00:11:31.632 26810.182 - 26929.338: 98.6607% ( 6) 00:11:31.632 26929.338 - 27048.495: 98.7006% ( 5) 00:11:31.632 27048.495 - 27167.651: 98.7404% ( 5) 00:11:31.632 27167.651 - 27286.807: 98.7803% ( 5) 00:11:31.632 27286.807 - 27405.964: 98.8202% ( 5) 00:11:31.632 27405.964 - 27525.120: 98.8839% ( 8) 00:11:31.632 27525.120 - 27644.276: 98.9557% ( 9) 00:11:31.632 27644.276 - 27763.433: 99.0274% ( 9) 00:11:31.632 27763.433 - 27882.589: 99.0673% ( 5) 00:11:31.632 27882.589 - 28001.745: 99.0912% ( 3) 00:11:31.632 28001.745 - 28120.902: 99.1151% ( 3) 00:11:31.632 28120.902 - 28240.058: 99.1390% ( 3) 00:11:31.632 28240.058 - 28359.215: 99.1629% ( 3) 00:11:31.632 28359.215 - 28478.371: 99.1869% ( 3) 00:11:31.632 28478.371 - 28597.527: 99.2108% ( 3) 00:11:31.632 28597.527 - 28716.684: 99.2347% ( 3) 00:11:31.632 28716.684 - 28835.840: 99.2586% ( 3) 00:11:31.632 28835.840 - 28954.996: 99.2825% ( 3) 00:11:31.632 28954.996 - 29074.153: 99.3064% ( 3) 00:11:31.632 29074.153 - 29193.309: 99.3304% ( 3) 00:11:31.632 29193.309 - 29312.465: 99.3543% ( 3) 00:11:31.632 29312.465 - 29431.622: 99.3862% ( 4) 00:11:31.632 29431.622 - 29550.778: 99.4021% ( 2) 00:11:31.632 29550.778 - 29669.935: 99.4260% ( 3) 00:11:31.632 29669.935 - 29789.091: 99.4499% ( 3) 00:11:31.632 29789.091 - 29908.247: 99.4818% ( 4) 00:11:31.632 29908.247 - 30027.404: 99.4898% ( 1) 00:11:31.632 34793.658 - 35031.971: 99.5057% ( 2) 00:11:31.632 35031.971 - 35270.284: 99.5615% ( 7) 00:11:31.632 35270.284 - 35508.596: 99.6094% ( 6) 00:11:31.632 35508.596 - 35746.909: 99.6572% ( 6) 00:11:31.632 35746.909 - 35985.222: 99.6971% ( 5) 00:11:31.632 35985.222 - 36223.535: 99.7449% ( 6) 00:11:31.632 36223.535 - 36461.847: 99.8007% ( 7) 00:11:31.632 36461.847 - 36700.160: 99.8485% ( 6) 00:11:31.632 36700.160 - 36938.473: 99.8964% ( 6) 00:11:31.632 36938.473 
- 37176.785: 99.9442% ( 6) 00:11:31.632 37176.785 - 37415.098: 99.9920% ( 6) 00:11:31.632 37415.098 - 37653.411: 100.0000% ( 1) 00:11:31.632 00:11:31.632 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:31.632 ============================================================================== 00:11:31.632 Range in us Cumulative IO count 00:11:31.632 7566.429 - 7596.218: 0.0080% ( 1) 00:11:31.632 7596.218 - 7626.007: 0.0239% ( 2) 00:11:31.632 7626.007 - 7685.585: 0.0558% ( 4) 00:11:31.632 7685.585 - 7745.164: 0.1674% ( 14) 00:11:31.632 7745.164 - 7804.742: 0.4624% ( 37) 00:11:31.632 7804.742 - 7864.320: 0.9566% ( 62) 00:11:31.632 7864.320 - 7923.898: 1.6342% ( 85) 00:11:31.632 7923.898 - 7983.476: 2.5989% ( 121) 00:11:31.632 7983.476 - 8043.055: 3.7548% ( 145) 00:11:31.632 8043.055 - 8102.633: 4.9745% ( 153) 00:11:31.632 8102.633 - 8162.211: 6.4094% ( 180) 00:11:31.632 8162.211 - 8221.789: 7.9161% ( 189) 00:11:31.632 8221.789 - 8281.367: 9.5026% ( 199) 00:11:31.632 8281.367 - 8340.945: 11.1368% ( 205) 00:11:31.632 8340.945 - 8400.524: 12.8587% ( 216) 00:11:31.632 8400.524 - 8460.102: 14.7321% ( 235) 00:11:31.632 8460.102 - 8519.680: 16.6374% ( 239) 00:11:31.632 8519.680 - 8579.258: 18.5666% ( 242) 00:11:31.632 8579.258 - 8638.836: 20.5835% ( 253) 00:11:31.632 8638.836 - 8698.415: 22.4729% ( 237) 00:11:31.632 8698.415 - 8757.993: 24.1071% ( 205) 00:11:31.632 8757.993 - 8817.571: 25.7015% ( 200) 00:11:31.633 8817.571 - 8877.149: 27.3119% ( 202) 00:11:31.633 8877.149 - 8936.727: 28.9302% ( 203) 00:11:31.633 8936.727 - 8996.305: 30.7557% ( 229) 00:11:31.633 8996.305 - 9055.884: 32.8922% ( 268) 00:11:31.633 9055.884 - 9115.462: 35.2439% ( 295) 00:11:31.633 9115.462 - 9175.040: 37.7471% ( 314) 00:11:31.633 9175.040 - 9234.618: 40.3141% ( 322) 00:11:31.633 9234.618 - 9294.196: 43.0006% ( 337) 00:11:31.633 9294.196 - 9353.775: 45.7191% ( 341) 00:11:31.633 9353.775 - 9413.353: 48.4614% ( 344) 00:11:31.633 9413.353 - 9472.931: 51.1719% ( 340) 00:11:31.633 9472.931 - 9532.509: 53.8983% ( 342) 00:11:31.633 9532.509 - 9592.087: 56.4971% ( 326) 00:11:31.633 9592.087 - 9651.665: 59.1119% ( 328) 00:11:31.633 9651.665 - 9711.244: 61.5992% ( 312) 00:11:31.633 9711.244 - 9770.822: 64.1103% ( 315) 00:11:31.633 9770.822 - 9830.400: 66.5737% ( 309) 00:11:31.633 9830.400 - 9889.978: 68.9254% ( 295) 00:11:31.633 9889.978 - 9949.556: 71.1017% ( 273) 00:11:31.633 9949.556 - 10009.135: 73.0070% ( 239) 00:11:31.633 10009.135 - 10068.713: 74.6173% ( 202) 00:11:31.633 10068.713 - 10128.291: 75.8689% ( 157) 00:11:31.633 10128.291 - 10187.869: 76.8734% ( 126) 00:11:31.633 10187.869 - 10247.447: 77.7423% ( 109) 00:11:31.633 10247.447 - 10307.025: 78.4200% ( 85) 00:11:31.633 10307.025 - 10366.604: 78.9461% ( 66) 00:11:31.633 10366.604 - 10426.182: 79.3686% ( 53) 00:11:31.633 10426.182 - 10485.760: 79.7114% ( 43) 00:11:31.633 10485.760 - 10545.338: 80.0462% ( 42) 00:11:31.633 10545.338 - 10604.916: 80.3890% ( 43) 00:11:31.633 10604.916 - 10664.495: 80.7318% ( 43) 00:11:31.633 10664.495 - 10724.073: 81.0746% ( 43) 00:11:31.633 10724.073 - 10783.651: 81.4094% ( 42) 00:11:31.633 10783.651 - 10843.229: 81.7682% ( 45) 00:11:31.633 10843.229 - 10902.807: 82.1189% ( 44) 00:11:31.633 10902.807 - 10962.385: 82.4458% ( 41) 00:11:31.633 10962.385 - 11021.964: 82.7806% ( 42) 00:11:31.633 11021.964 - 11081.542: 83.0517% ( 34) 00:11:31.633 11081.542 - 11141.120: 83.3147% ( 33) 00:11:31.633 11141.120 - 11200.698: 83.5300% ( 27) 00:11:31.633 11200.698 - 11260.276: 83.6894% ( 20) 00:11:31.633 11260.276 - 11319.855: 83.8249% ( 
17) 00:11:31.633 11319.855 - 11379.433: 83.9605% ( 17) 00:11:31.633 11379.433 - 11439.011: 84.0800% ( 15) 00:11:31.633 11439.011 - 11498.589: 84.2235% ( 18) 00:11:31.633 11498.589 - 11558.167: 84.3511% ( 16) 00:11:31.633 11558.167 - 11617.745: 84.4627% ( 14) 00:11:31.633 11617.745 - 11677.324: 84.5663% ( 13) 00:11:31.633 11677.324 - 11736.902: 84.6620% ( 12) 00:11:31.633 11736.902 - 11796.480: 84.7497% ( 11) 00:11:31.633 11796.480 - 11856.058: 84.8374% ( 11) 00:11:31.633 11856.058 - 11915.636: 84.9649% ( 16) 00:11:31.633 11915.636 - 11975.215: 85.1004% ( 17) 00:11:31.633 11975.215 - 12034.793: 85.2758% ( 22) 00:11:31.633 12034.793 - 12094.371: 85.4672% ( 24) 00:11:31.633 12094.371 - 12153.949: 85.6665% ( 25) 00:11:31.633 12153.949 - 12213.527: 85.8897% ( 28) 00:11:31.633 12213.527 - 12273.105: 86.1129% ( 28) 00:11:31.633 12273.105 - 12332.684: 86.4158% ( 38) 00:11:31.633 12332.684 - 12392.262: 86.6789% ( 33) 00:11:31.633 12392.262 - 12451.840: 86.9579% ( 35) 00:11:31.633 12451.840 - 12511.418: 87.2449% ( 36) 00:11:31.633 12511.418 - 12570.996: 87.5558% ( 39) 00:11:31.633 12570.996 - 12630.575: 87.8667% ( 39) 00:11:31.633 12630.575 - 12690.153: 88.2175% ( 44) 00:11:31.633 12690.153 - 12749.731: 88.5364% ( 40) 00:11:31.633 12749.731 - 12809.309: 88.8951% ( 45) 00:11:31.633 12809.309 - 12868.887: 89.1980% ( 38) 00:11:31.633 12868.887 - 12928.465: 89.4850% ( 36) 00:11:31.633 12928.465 - 12988.044: 89.8039% ( 40) 00:11:31.633 12988.044 - 13047.622: 90.1387% ( 42) 00:11:31.633 13047.622 - 13107.200: 90.4416% ( 38) 00:11:31.633 13107.200 - 13166.778: 90.7446% ( 38) 00:11:31.633 13166.778 - 13226.356: 91.0475% ( 38) 00:11:31.633 13226.356 - 13285.935: 91.3345% ( 36) 00:11:31.633 13285.935 - 13345.513: 91.6534% ( 40) 00:11:31.633 13345.513 - 13405.091: 91.9643% ( 39) 00:11:31.633 13405.091 - 13464.669: 92.2194% ( 32) 00:11:31.633 13464.669 - 13524.247: 92.4665% ( 31) 00:11:31.633 13524.247 - 13583.825: 92.7216% ( 32) 00:11:31.633 13583.825 - 13643.404: 92.9528% ( 29) 00:11:31.633 13643.404 - 13702.982: 93.1999% ( 31) 00:11:31.633 13702.982 - 13762.560: 93.4391% ( 30) 00:11:31.633 13762.560 - 13822.138: 93.6942% ( 32) 00:11:31.633 13822.138 - 13881.716: 93.8696% ( 22) 00:11:31.633 13881.716 - 13941.295: 94.0768% ( 26) 00:11:31.633 13941.295 - 14000.873: 94.2921% ( 27) 00:11:31.633 14000.873 - 14060.451: 94.4834% ( 24) 00:11:31.633 14060.451 - 14120.029: 94.6508% ( 21) 00:11:31.633 14120.029 - 14179.607: 94.8023% ( 19) 00:11:31.633 14179.607 - 14239.185: 94.9857% ( 23) 00:11:31.633 14239.185 - 14298.764: 95.1451% ( 20) 00:11:31.633 14298.764 - 14358.342: 95.3444% ( 25) 00:11:31.633 14358.342 - 14417.920: 95.5118% ( 21) 00:11:31.633 14417.920 - 14477.498: 95.6633% ( 19) 00:11:31.633 14477.498 - 14537.076: 95.7908% ( 16) 00:11:31.633 14537.076 - 14596.655: 95.9343% ( 18) 00:11:31.633 14596.655 - 14656.233: 96.1017% ( 21) 00:11:31.633 14656.233 - 14715.811: 96.2771% ( 22) 00:11:31.633 14715.811 - 14775.389: 96.4365% ( 20) 00:11:31.633 14775.389 - 14834.967: 96.5800% ( 18) 00:11:31.633 14834.967 - 14894.545: 96.7235% ( 18) 00:11:31.633 14894.545 - 14954.124: 96.8431% ( 15) 00:11:31.633 14954.124 - 15013.702: 96.9627% ( 15) 00:11:31.633 15013.702 - 15073.280: 97.0823% ( 15) 00:11:31.633 15073.280 - 15132.858: 97.2018% ( 15) 00:11:31.633 15132.858 - 15192.436: 97.3135% ( 14) 00:11:31.633 15192.436 - 15252.015: 97.4171% ( 13) 00:11:31.633 15252.015 - 15371.171: 97.5686% ( 19) 00:11:31.633 15371.171 - 15490.327: 97.7121% ( 18) 00:11:31.633 15490.327 - 15609.484: 97.8157% ( 13) 00:11:31.633 15609.484 - 
15728.640: 97.9034% ( 11) 00:11:31.633 15728.640 - 15847.796: 97.9432% ( 5) 00:11:31.633 15847.796 - 15966.953: 97.9592% ( 2) 00:11:31.633 24903.680 - 25022.836: 98.0070% ( 6) 00:11:31.633 25022.836 - 25141.993: 98.0469% ( 5) 00:11:31.633 25141.993 - 25261.149: 98.0947% ( 6) 00:11:31.633 25261.149 - 25380.305: 98.1585% ( 8) 00:11:31.633 25380.305 - 25499.462: 98.2223% ( 8) 00:11:31.633 25499.462 - 25618.618: 98.2940% ( 9) 00:11:31.633 25618.618 - 25737.775: 98.3578% ( 8) 00:11:31.633 25737.775 - 25856.931: 98.4216% ( 8) 00:11:31.633 25856.931 - 25976.087: 98.4853% ( 8) 00:11:31.633 25976.087 - 26095.244: 98.5411% ( 7) 00:11:31.633 26095.244 - 26214.400: 98.6129% ( 9) 00:11:31.633 26214.400 - 26333.556: 98.6767% ( 8) 00:11:31.633 26333.556 - 26452.713: 98.7006% ( 3) 00:11:31.633 26452.713 - 26571.869: 98.7404% ( 5) 00:11:31.633 26571.869 - 26691.025: 98.7962% ( 7) 00:11:31.633 26691.025 - 26810.182: 98.8680% ( 9) 00:11:31.633 26810.182 - 26929.338: 98.9477% ( 10) 00:11:31.633 26929.338 - 27048.495: 99.0035% ( 7) 00:11:31.633 27048.495 - 27167.651: 99.0673% ( 8) 00:11:31.633 27167.651 - 27286.807: 99.1311% ( 8) 00:11:31.633 27286.807 - 27405.964: 99.1948% ( 8) 00:11:31.633 27405.964 - 27525.120: 99.2586% ( 8) 00:11:31.633 27525.120 - 27644.276: 99.3144% ( 7) 00:11:31.633 27644.276 - 27763.433: 99.3702% ( 7) 00:11:31.633 27763.433 - 27882.589: 99.4021% ( 4) 00:11:31.633 27882.589 - 28001.745: 99.4420% ( 5) 00:11:31.633 28001.745 - 28120.902: 99.4818% ( 5) 00:11:31.633 28120.902 - 28240.058: 99.4898% ( 1) 00:11:31.633 32648.844 - 32887.156: 99.5137% ( 3) 00:11:31.633 32887.156 - 33125.469: 99.5536% ( 5) 00:11:31.633 33125.469 - 33363.782: 99.5934% ( 5) 00:11:31.633 33363.782 - 33602.095: 99.6333% ( 5) 00:11:31.633 33602.095 - 33840.407: 99.6891% ( 7) 00:11:31.633 33840.407 - 34078.720: 99.7290% ( 5) 00:11:31.633 34078.720 - 34317.033: 99.7848% ( 7) 00:11:31.633 34317.033 - 34555.345: 99.8326% ( 6) 00:11:31.633 34555.345 - 34793.658: 99.8804% ( 6) 00:11:31.633 34793.658 - 35031.971: 99.9283% ( 6) 00:11:31.633 35031.971 - 35270.284: 99.9841% ( 7) 00:11:31.633 35270.284 - 35508.596: 100.0000% ( 2) 00:11:31.633 00:11:31.633 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:31.633 ============================================================================== 00:11:31.633 Range in us Cumulative IO count 00:11:31.633 7596.218 - 7626.007: 0.0080% ( 1) 00:11:31.633 7626.007 - 7685.585: 0.0399% ( 4) 00:11:31.633 7685.585 - 7745.164: 0.1674% ( 16) 00:11:31.633 7745.164 - 7804.742: 0.4624% ( 37) 00:11:31.633 7804.742 - 7864.320: 1.0045% ( 68) 00:11:31.633 7864.320 - 7923.898: 1.6661% ( 83) 00:11:31.633 7923.898 - 7983.476: 2.5909% ( 116) 00:11:31.633 7983.476 - 8043.055: 3.7309% ( 143) 00:11:31.633 8043.055 - 8102.633: 5.0223% ( 162) 00:11:31.633 8102.633 - 8162.211: 6.4174% ( 175) 00:11:31.633 8162.211 - 8221.789: 7.9002% ( 186) 00:11:31.633 8221.789 - 8281.367: 9.5026% ( 201) 00:11:31.633 8281.367 - 8340.945: 11.1926% ( 212) 00:11:31.633 8340.945 - 8400.524: 12.8986% ( 214) 00:11:31.633 8400.524 - 8460.102: 14.6843% ( 224) 00:11:31.633 8460.102 - 8519.680: 16.5896% ( 239) 00:11:31.633 8519.680 - 8579.258: 18.5188% ( 242) 00:11:31.633 8579.258 - 8638.836: 20.4002% ( 236) 00:11:31.633 8638.836 - 8698.415: 22.2018% ( 226) 00:11:31.633 8698.415 - 8757.993: 23.9238% ( 216) 00:11:31.633 8757.993 - 8817.571: 25.5182% ( 200) 00:11:31.633 8817.571 - 8877.149: 27.1365% ( 203) 00:11:31.633 8877.149 - 8936.727: 28.7388% ( 201) 00:11:31.633 8936.727 - 8996.305: 30.5405% ( 226) 00:11:31.633 8996.305 
- 9055.884: 32.6690% ( 267) 00:11:31.633 9055.884 - 9115.462: 35.0207% ( 295) 00:11:31.633 9115.462 - 9175.040: 37.4601% ( 306) 00:11:31.633 9175.040 - 9234.618: 40.1387% ( 336) 00:11:31.633 9234.618 - 9294.196: 42.7455% ( 327) 00:11:31.633 9294.196 - 9353.775: 45.3683% ( 329) 00:11:31.633 9353.775 - 9413.353: 48.0947% ( 342) 00:11:31.633 9413.353 - 9472.931: 50.8450% ( 345) 00:11:31.633 9472.931 - 9532.509: 53.5077% ( 334) 00:11:31.633 9532.509 - 9592.087: 56.1942% ( 337) 00:11:31.633 9592.087 - 9651.665: 58.9126% ( 341) 00:11:31.633 9651.665 - 9711.244: 61.5354% ( 329) 00:11:31.633 9711.244 - 9770.822: 64.1183% ( 324) 00:11:31.633 9770.822 - 9830.400: 66.6454% ( 317) 00:11:31.633 9830.400 - 9889.978: 69.1087% ( 309) 00:11:31.633 9889.978 - 9949.556: 71.4525% ( 294) 00:11:31.633 9949.556 - 10009.135: 73.4216% ( 247) 00:11:31.633 10009.135 - 10068.713: 75.1036% ( 211) 00:11:31.633 10068.713 - 10128.291: 76.3871% ( 161) 00:11:31.634 10128.291 - 10187.869: 77.3677% ( 123) 00:11:31.634 10187.869 - 10247.447: 78.1808% ( 102) 00:11:31.634 10247.447 - 10307.025: 78.8265% ( 81) 00:11:31.634 10307.025 - 10366.604: 79.3128% ( 61) 00:11:31.634 10366.604 - 10426.182: 79.6955% ( 48) 00:11:31.634 10426.182 - 10485.760: 80.0383% ( 43) 00:11:31.634 10485.760 - 10545.338: 80.3731% ( 42) 00:11:31.634 10545.338 - 10604.916: 80.6521% ( 35) 00:11:31.634 10604.916 - 10664.495: 80.9710% ( 40) 00:11:31.634 10664.495 - 10724.073: 81.3058% ( 42) 00:11:31.634 10724.073 - 10783.651: 81.6247% ( 40) 00:11:31.634 10783.651 - 10843.229: 81.9515% ( 41) 00:11:31.634 10843.229 - 10902.807: 82.2624% ( 39) 00:11:31.634 10902.807 - 10962.385: 82.5494% ( 36) 00:11:31.634 10962.385 - 11021.964: 82.8125% ( 33) 00:11:31.634 11021.964 - 11081.542: 83.0437% ( 29) 00:11:31.634 11081.542 - 11141.120: 83.2510% ( 26) 00:11:31.634 11141.120 - 11200.698: 83.4503% ( 25) 00:11:31.634 11200.698 - 11260.276: 83.6256% ( 22) 00:11:31.634 11260.276 - 11319.855: 83.8170% ( 24) 00:11:31.634 11319.855 - 11379.433: 83.9525% ( 17) 00:11:31.634 11379.433 - 11439.011: 84.0880% ( 17) 00:11:31.634 11439.011 - 11498.589: 84.1996% ( 14) 00:11:31.634 11498.589 - 11558.167: 84.3272% ( 16) 00:11:31.634 11558.167 - 11617.745: 84.4308% ( 13) 00:11:31.634 11617.745 - 11677.324: 84.5105% ( 10) 00:11:31.634 11677.324 - 11736.902: 84.5823% ( 9) 00:11:31.634 11736.902 - 11796.480: 84.6700% ( 11) 00:11:31.634 11796.480 - 11856.058: 84.7975% ( 16) 00:11:31.634 11856.058 - 11915.636: 84.9649% ( 21) 00:11:31.634 11915.636 - 11975.215: 85.1483% ( 23) 00:11:31.634 11975.215 - 12034.793: 85.3077% ( 20) 00:11:31.634 12034.793 - 12094.371: 85.4911% ( 23) 00:11:31.634 12094.371 - 12153.949: 85.6904% ( 25) 00:11:31.634 12153.949 - 12213.527: 85.9216% ( 29) 00:11:31.634 12213.527 - 12273.105: 86.1129% ( 24) 00:11:31.634 12273.105 - 12332.684: 86.3441% ( 29) 00:11:31.634 12332.684 - 12392.262: 86.5673% ( 28) 00:11:31.634 12392.262 - 12451.840: 86.8622% ( 37) 00:11:31.634 12451.840 - 12511.418: 87.1492% ( 36) 00:11:31.634 12511.418 - 12570.996: 87.4522% ( 38) 00:11:31.634 12570.996 - 12630.575: 87.7631% ( 39) 00:11:31.634 12630.575 - 12690.153: 88.1298% ( 46) 00:11:31.634 12690.153 - 12749.731: 88.5364% ( 51) 00:11:31.634 12749.731 - 12809.309: 88.9429% ( 51) 00:11:31.634 12809.309 - 12868.887: 89.2618% ( 40) 00:11:31.634 12868.887 - 12928.465: 89.5886% ( 41) 00:11:31.634 12928.465 - 12988.044: 89.8996% ( 39) 00:11:31.634 12988.044 - 13047.622: 90.2184% ( 40) 00:11:31.634 13047.622 - 13107.200: 90.5533% ( 42) 00:11:31.634 13107.200 - 13166.778: 90.9040% ( 44) 00:11:31.634 
13166.778 - 13226.356: 91.2787% ( 47) 00:11:31.634 13226.356 - 13285.935: 91.6454% ( 46) 00:11:31.634 13285.935 - 13345.513: 92.0041% ( 45) 00:11:31.634 13345.513 - 13405.091: 92.2592% ( 32) 00:11:31.634 13405.091 - 13464.669: 92.4984% ( 30) 00:11:31.634 13464.669 - 13524.247: 92.7455% ( 31) 00:11:31.634 13524.247 - 13583.825: 92.9847% ( 30) 00:11:31.634 13583.825 - 13643.404: 93.1680% ( 23) 00:11:31.634 13643.404 - 13702.982: 93.3833% ( 27) 00:11:31.634 13702.982 - 13762.560: 93.5906% ( 26) 00:11:31.634 13762.560 - 13822.138: 93.8217% ( 29) 00:11:31.634 13822.138 - 13881.716: 94.0370% ( 27) 00:11:31.634 13881.716 - 13941.295: 94.2044% ( 21) 00:11:31.634 13941.295 - 14000.873: 94.3638% ( 20) 00:11:31.634 14000.873 - 14060.451: 94.5392% ( 22) 00:11:31.634 14060.451 - 14120.029: 94.6747% ( 17) 00:11:31.634 14120.029 - 14179.607: 94.8023% ( 16) 00:11:31.634 14179.607 - 14239.185: 94.9458% ( 18) 00:11:31.634 14239.185 - 14298.764: 95.0654% ( 15) 00:11:31.634 14298.764 - 14358.342: 95.2248% ( 20) 00:11:31.634 14358.342 - 14417.920: 95.3842% ( 20) 00:11:31.634 14417.920 - 14477.498: 95.5357% ( 19) 00:11:31.634 14477.498 - 14537.076: 95.7111% ( 22) 00:11:31.634 14537.076 - 14596.655: 95.8865% ( 22) 00:11:31.634 14596.655 - 14656.233: 96.0858% ( 25) 00:11:31.634 14656.233 - 14715.811: 96.3010% ( 27) 00:11:31.634 14715.811 - 14775.389: 96.4684% ( 21) 00:11:31.634 14775.389 - 14834.967: 96.6199% ( 19) 00:11:31.634 14834.967 - 14894.545: 96.7714% ( 19) 00:11:31.634 14894.545 - 14954.124: 96.8909% ( 15) 00:11:31.634 14954.124 - 15013.702: 97.0185% ( 16) 00:11:31.634 15013.702 - 15073.280: 97.1381% ( 15) 00:11:31.634 15073.280 - 15132.858: 97.2497% ( 14) 00:11:31.634 15132.858 - 15192.436: 97.3294% ( 10) 00:11:31.634 15192.436 - 15252.015: 97.3932% ( 8) 00:11:31.634 15252.015 - 15371.171: 97.5207% ( 16) 00:11:31.634 15371.171 - 15490.327: 97.6084% ( 11) 00:11:31.634 15490.327 - 15609.484: 97.7041% ( 12) 00:11:31.634 15609.484 - 15728.640: 97.7997% ( 12) 00:11:31.634 15728.640 - 15847.796: 97.8874% ( 11) 00:11:31.634 15847.796 - 15966.953: 97.9273% ( 5) 00:11:31.634 15966.953 - 16086.109: 97.9592% ( 4) 00:11:31.634 22639.709 - 22758.865: 97.9751% ( 2) 00:11:31.634 22758.865 - 22878.022: 97.9831% ( 1) 00:11:31.634 22878.022 - 22997.178: 98.0070% ( 3) 00:11:31.634 22997.178 - 23116.335: 98.0230% ( 2) 00:11:31.634 23116.335 - 23235.491: 98.0469% ( 3) 00:11:31.634 23235.491 - 23354.647: 98.0788% ( 4) 00:11:31.634 23354.647 - 23473.804: 98.1027% ( 3) 00:11:31.634 23473.804 - 23592.960: 98.1266% ( 3) 00:11:31.634 23592.960 - 23712.116: 98.1505% ( 3) 00:11:31.634 23712.116 - 23831.273: 98.1744% ( 3) 00:11:31.634 23831.273 - 23950.429: 98.1983% ( 3) 00:11:31.634 23950.429 - 24069.585: 98.2223% ( 3) 00:11:31.634 24069.585 - 24188.742: 98.2462% ( 3) 00:11:31.634 24188.742 - 24307.898: 98.2701% ( 3) 00:11:31.634 24307.898 - 24427.055: 98.2940% ( 3) 00:11:31.634 24427.055 - 24546.211: 98.3179% ( 3) 00:11:31.634 24546.211 - 24665.367: 98.3418% ( 3) 00:11:31.634 24665.367 - 24784.524: 98.3737% ( 4) 00:11:31.634 24784.524 - 24903.680: 98.3897% ( 2) 00:11:31.634 24903.680 - 25022.836: 98.4136% ( 3) 00:11:31.634 25022.836 - 25141.993: 98.4375% ( 3) 00:11:31.634 25141.993 - 25261.149: 98.4774% ( 5) 00:11:31.634 25261.149 - 25380.305: 98.5252% ( 6) 00:11:31.634 25380.305 - 25499.462: 98.5890% ( 8) 00:11:31.634 25499.462 - 25618.618: 98.6288% ( 5) 00:11:31.634 25618.618 - 25737.775: 98.6687% ( 5) 00:11:31.634 25737.775 - 25856.931: 98.7165% ( 6) 00:11:31.634 25856.931 - 25976.087: 98.7643% ( 6) 00:11:31.634 25976.087 - 
26095.244: 98.8122% ( 6) 00:11:31.634 26095.244 - 26214.400: 98.8600% ( 6) 00:11:31.634 26214.400 - 26333.556: 98.8999% ( 5) 00:11:31.634 26333.556 - 26452.713: 98.9477% ( 6) 00:11:31.634 26452.713 - 26571.869: 98.9796% ( 4) 00:11:31.634 26571.869 - 26691.025: 98.9876% ( 1) 00:11:31.634 26691.025 - 26810.182: 99.0274% ( 5) 00:11:31.634 26810.182 - 26929.338: 99.0673% ( 5) 00:11:31.634 26929.338 - 27048.495: 99.1151% ( 6) 00:11:31.634 27048.495 - 27167.651: 99.1470% ( 4) 00:11:31.634 27167.651 - 27286.807: 99.1869% ( 5) 00:11:31.634 27286.807 - 27405.964: 99.2347% ( 6) 00:11:31.634 27405.964 - 27525.120: 99.2825% ( 6) 00:11:31.634 27525.120 - 27644.276: 99.3144% ( 4) 00:11:31.634 27644.276 - 27763.433: 99.3622% ( 6) 00:11:31.634 27763.433 - 27882.589: 99.4021% ( 5) 00:11:31.634 27882.589 - 28001.745: 99.4420% ( 5) 00:11:31.634 28001.745 - 28120.902: 99.4818% ( 5) 00:11:31.634 28120.902 - 28240.058: 99.4898% ( 1) 00:11:31.634 30265.716 - 30384.873: 99.5057% ( 2) 00:11:31.634 30384.873 - 30504.029: 99.5297% ( 3) 00:11:31.634 30504.029 - 30742.342: 99.5775% ( 6) 00:11:31.634 30742.342 - 30980.655: 99.6253% ( 6) 00:11:31.634 30980.655 - 31218.967: 99.6732% ( 6) 00:11:31.634 31218.967 - 31457.280: 99.7210% ( 6) 00:11:31.634 31457.280 - 31695.593: 99.7688% ( 6) 00:11:31.634 31695.593 - 31933.905: 99.8246% ( 7) 00:11:31.634 31933.905 - 32172.218: 99.8724% ( 6) 00:11:31.634 32172.218 - 32410.531: 99.9203% ( 6) 00:11:31.634 32410.531 - 32648.844: 99.9681% ( 6) 00:11:31.634 32648.844 - 32887.156: 100.0000% ( 4) 00:11:31.634 00:11:31.634 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:31.634 ============================================================================== 00:11:31.634 Range in us Cumulative IO count 00:11:31.634 7626.007 - 7685.585: 0.0478% ( 6) 00:11:31.634 7685.585 - 7745.164: 0.1515% ( 13) 00:11:31.634 7745.164 - 7804.742: 0.4225% ( 34) 00:11:31.635 7804.742 - 7864.320: 0.9327% ( 64) 00:11:31.635 7864.320 - 7923.898: 1.6821% ( 94) 00:11:31.635 7923.898 - 7983.476: 2.6068% ( 116) 00:11:31.635 7983.476 - 8043.055: 3.6910% ( 136) 00:11:31.635 8043.055 - 8102.633: 4.9107% ( 153) 00:11:31.635 8102.633 - 8162.211: 6.3855% ( 185) 00:11:31.635 8162.211 - 8221.789: 7.9082% ( 191) 00:11:31.635 8221.789 - 8281.367: 9.5584% ( 207) 00:11:31.635 8281.367 - 8340.945: 11.3122% ( 220) 00:11:31.635 8340.945 - 8400.524: 13.0820% ( 222) 00:11:31.635 8400.524 - 8460.102: 14.9554% ( 235) 00:11:31.635 8460.102 - 8519.680: 16.8447% ( 237) 00:11:31.635 8519.680 - 8579.258: 18.7819% ( 243) 00:11:31.635 8579.258 - 8638.836: 20.6633% ( 236) 00:11:31.635 8638.836 - 8698.415: 22.5287% ( 234) 00:11:31.635 8698.415 - 8757.993: 24.2427% ( 215) 00:11:31.635 8757.993 - 8817.571: 25.7733% ( 192) 00:11:31.635 8817.571 - 8877.149: 27.2959% ( 191) 00:11:31.635 8877.149 - 8936.727: 28.9222% ( 204) 00:11:31.635 8936.727 - 8996.305: 30.7956% ( 235) 00:11:31.635 8996.305 - 9055.884: 32.8524% ( 258) 00:11:31.635 9055.884 - 9115.462: 35.1004% ( 282) 00:11:31.635 9115.462 - 9175.040: 37.4841% ( 299) 00:11:31.635 9175.040 - 9234.618: 40.0989% ( 328) 00:11:31.635 9234.618 - 9294.196: 42.6818% ( 324) 00:11:31.635 9294.196 - 9353.775: 45.4002% ( 341) 00:11:31.635 9353.775 - 9413.353: 48.1186% ( 341) 00:11:31.635 9413.353 - 9472.931: 50.9646% ( 357) 00:11:31.635 9472.931 - 9532.509: 53.7309% ( 347) 00:11:31.635 9532.509 - 9592.087: 56.4094% ( 336) 00:11:31.635 9592.087 - 9651.665: 59.0322% ( 329) 00:11:31.635 9651.665 - 9711.244: 61.6948% ( 334) 00:11:31.635 9711.244 - 9770.822: 64.3096% ( 328) 00:11:31.635 
9770.822 - 9830.400: 66.8766% ( 322) 00:11:31.635 9830.400 - 9889.978: 69.2921% ( 303) 00:11:31.635 9889.978 - 9949.556: 71.5163% ( 279) 00:11:31.635 9949.556 - 10009.135: 73.4534% ( 243) 00:11:31.635 10009.135 - 10068.713: 75.0558% ( 201) 00:11:31.635 10068.713 - 10128.291: 76.2835% ( 154) 00:11:31.635 10128.291 - 10187.869: 77.3198% ( 130) 00:11:31.635 10187.869 - 10247.447: 78.1649% ( 106) 00:11:31.635 10247.447 - 10307.025: 78.8345% ( 84) 00:11:31.635 10307.025 - 10366.604: 79.3607% ( 66) 00:11:31.635 10366.604 - 10426.182: 79.7274% ( 46) 00:11:31.635 10426.182 - 10485.760: 80.0143% ( 36) 00:11:31.635 10485.760 - 10545.338: 80.2934% ( 35) 00:11:31.635 10545.338 - 10604.916: 80.5724% ( 35) 00:11:31.635 10604.916 - 10664.495: 80.9391% ( 46) 00:11:31.635 10664.495 - 10724.073: 81.2978% ( 45) 00:11:31.635 10724.073 - 10783.651: 81.6247% ( 41) 00:11:31.635 10783.651 - 10843.229: 81.9117% ( 36) 00:11:31.635 10843.229 - 10902.807: 82.1987% ( 36) 00:11:31.635 10902.807 - 10962.385: 82.4857% ( 36) 00:11:31.635 10962.385 - 11021.964: 82.7408% ( 32) 00:11:31.635 11021.964 - 11081.542: 82.9879% ( 31) 00:11:31.635 11081.542 - 11141.120: 83.2191% ( 29) 00:11:31.635 11141.120 - 11200.698: 83.3705% ( 19) 00:11:31.635 11200.698 - 11260.276: 83.5459% ( 22) 00:11:31.635 11260.276 - 11319.855: 83.7133% ( 21) 00:11:31.635 11319.855 - 11379.433: 83.9047% ( 24) 00:11:31.635 11379.433 - 11439.011: 84.0960% ( 24) 00:11:31.635 11439.011 - 11498.589: 84.2395% ( 18) 00:11:31.635 11498.589 - 11558.167: 84.3351% ( 12) 00:11:31.635 11558.167 - 11617.745: 84.3989% ( 8) 00:11:31.635 11617.745 - 11677.324: 84.4547% ( 7) 00:11:31.635 11677.324 - 11736.902: 84.5584% ( 13) 00:11:31.635 11736.902 - 11796.480: 84.6859% ( 16) 00:11:31.635 11796.480 - 11856.058: 84.8453% ( 20) 00:11:31.635 11856.058 - 11915.636: 84.9968% ( 19) 00:11:31.635 11915.636 - 11975.215: 85.1961% ( 25) 00:11:31.635 11975.215 - 12034.793: 85.3795% ( 23) 00:11:31.635 12034.793 - 12094.371: 85.5469% ( 21) 00:11:31.635 12094.371 - 12153.949: 85.7223% ( 22) 00:11:31.635 12153.949 - 12213.527: 85.9136% ( 24) 00:11:31.635 12213.527 - 12273.105: 86.0969% ( 23) 00:11:31.635 12273.105 - 12332.684: 86.3281% ( 29) 00:11:31.635 12332.684 - 12392.262: 86.5593% ( 29) 00:11:31.635 12392.262 - 12451.840: 86.8304% ( 34) 00:11:31.635 12451.840 - 12511.418: 87.1253% ( 37) 00:11:31.635 12511.418 - 12570.996: 87.4601% ( 42) 00:11:31.635 12570.996 - 12630.575: 87.8268% ( 46) 00:11:31.635 12630.575 - 12690.153: 88.2095% ( 48) 00:11:31.635 12690.153 - 12749.731: 88.5364% ( 41) 00:11:31.635 12749.731 - 12809.309: 88.8871% ( 44) 00:11:31.635 12809.309 - 12868.887: 89.2219% ( 42) 00:11:31.635 12868.887 - 12928.465: 89.5568% ( 42) 00:11:31.635 12928.465 - 12988.044: 89.9314% ( 47) 00:11:31.635 12988.044 - 13047.622: 90.2982% ( 46) 00:11:31.635 13047.622 - 13107.200: 90.6409% ( 43) 00:11:31.635 13107.200 - 13166.778: 90.9837% ( 43) 00:11:31.635 13166.778 - 13226.356: 91.3425% ( 45) 00:11:31.635 13226.356 - 13285.935: 91.7012% ( 45) 00:11:31.635 13285.935 - 13345.513: 92.0281% ( 41) 00:11:31.635 13345.513 - 13405.091: 92.2911% ( 33) 00:11:31.635 13405.091 - 13464.669: 92.5143% ( 28) 00:11:31.635 13464.669 - 13524.247: 92.7296% ( 27) 00:11:31.635 13524.247 - 13583.825: 92.9209% ( 24) 00:11:31.635 13583.825 - 13643.404: 93.0963% ( 22) 00:11:31.635 13643.404 - 13702.982: 93.2717% ( 22) 00:11:31.635 13702.982 - 13762.560: 93.4630% ( 24) 00:11:31.635 13762.560 - 13822.138: 93.6224% ( 20) 00:11:31.635 13822.138 - 13881.716: 93.7739% ( 19) 00:11:31.635 13881.716 - 13941.295: 93.9812% ( 
26) 00:11:31.635 13941.295 - 14000.873: 94.1645% ( 23) 00:11:31.635 14000.873 - 14060.451: 94.3718% ( 26) 00:11:31.635 14060.451 - 14120.029: 94.5472% ( 22) 00:11:31.635 14120.029 - 14179.607: 94.7465% ( 25) 00:11:31.635 14179.607 - 14239.185: 94.9298% ( 23) 00:11:31.635 14239.185 - 14298.764: 95.1212% ( 24) 00:11:31.635 14298.764 - 14358.342: 95.2806% ( 20) 00:11:31.635 14358.342 - 14417.920: 95.4321% ( 19) 00:11:31.635 14417.920 - 14477.498: 95.6075% ( 22) 00:11:31.635 14477.498 - 14537.076: 95.7908% ( 23) 00:11:31.635 14537.076 - 14596.655: 95.9821% ( 24) 00:11:31.635 14596.655 - 14656.233: 96.2213% ( 30) 00:11:31.635 14656.233 - 14715.811: 96.4286% ( 26) 00:11:31.635 14715.811 - 14775.389: 96.6199% ( 24) 00:11:31.635 14775.389 - 14834.967: 96.7634% ( 18) 00:11:31.635 14834.967 - 14894.545: 96.9308% ( 21) 00:11:31.635 14894.545 - 14954.124: 97.0663% ( 17) 00:11:31.635 14954.124 - 15013.702: 97.1620% ( 12) 00:11:31.635 15013.702 - 15073.280: 97.2577% ( 12) 00:11:31.635 15073.280 - 15132.858: 97.3453% ( 11) 00:11:31.635 15132.858 - 15192.436: 97.4171% ( 9) 00:11:31.635 15192.436 - 15252.015: 97.4729% ( 7) 00:11:31.635 15252.015 - 15371.171: 97.5925% ( 15) 00:11:31.635 15371.171 - 15490.327: 97.6961% ( 13) 00:11:31.635 15490.327 - 15609.484: 97.7519% ( 7) 00:11:31.635 15609.484 - 15728.640: 97.8077% ( 7) 00:11:31.635 15728.640 - 15847.796: 97.8555% ( 6) 00:11:31.635 15847.796 - 15966.953: 97.9193% ( 8) 00:11:31.635 15966.953 - 16086.109: 97.9592% ( 5) 00:11:31.635 20137.425 - 20256.582: 97.9751% ( 2) 00:11:31.635 20256.582 - 20375.738: 97.9990% ( 3) 00:11:31.635 20375.738 - 20494.895: 98.0230% ( 3) 00:11:31.635 20494.895 - 20614.051: 98.0469% ( 3) 00:11:31.635 20614.051 - 20733.207: 98.0708% ( 3) 00:11:31.635 20733.207 - 20852.364: 98.0867% ( 2) 00:11:31.635 20852.364 - 20971.520: 98.1107% ( 3) 00:11:31.635 20971.520 - 21090.676: 98.1346% ( 3) 00:11:31.635 21090.676 - 21209.833: 98.1585% ( 3) 00:11:31.635 21209.833 - 21328.989: 98.1824% ( 3) 00:11:31.635 21328.989 - 21448.145: 98.2063% ( 3) 00:11:31.635 21448.145 - 21567.302: 98.2302% ( 3) 00:11:31.635 21567.302 - 21686.458: 98.2541% ( 3) 00:11:31.635 21686.458 - 21805.615: 98.2781% ( 3) 00:11:31.635 21805.615 - 21924.771: 98.3020% ( 3) 00:11:31.635 21924.771 - 22043.927: 98.3259% ( 3) 00:11:31.635 22043.927 - 22163.084: 98.3498% ( 3) 00:11:31.635 22163.084 - 22282.240: 98.3817% ( 4) 00:11:31.635 22282.240 - 22401.396: 98.4056% ( 3) 00:11:31.635 22401.396 - 22520.553: 98.4295% ( 3) 00:11:31.635 22520.553 - 22639.709: 98.4455% ( 2) 00:11:31.635 22639.709 - 22758.865: 98.4694% ( 3) 00:11:31.635 25618.618 - 25737.775: 98.5013% ( 4) 00:11:31.635 25737.775 - 25856.931: 98.5491% ( 6) 00:11:31.635 25856.931 - 25976.087: 98.5890% ( 5) 00:11:31.635 25976.087 - 26095.244: 98.6368% ( 6) 00:11:31.635 26095.244 - 26214.400: 98.6846% ( 6) 00:11:31.635 26214.400 - 26333.556: 98.7325% ( 6) 00:11:31.635 26333.556 - 26452.713: 98.7803% ( 6) 00:11:31.635 26452.713 - 26571.869: 98.8122% ( 4) 00:11:31.635 26571.869 - 26691.025: 98.8600% ( 6) 00:11:31.635 26691.025 - 26810.182: 98.9477% ( 11) 00:11:31.635 26810.182 - 26929.338: 99.0274% ( 10) 00:11:31.635 26929.338 - 27048.495: 99.0912% ( 8) 00:11:31.635 27048.495 - 27167.651: 99.1311% ( 5) 00:11:31.635 27167.651 - 27286.807: 99.1629% ( 4) 00:11:31.635 27286.807 - 27405.964: 99.2028% ( 5) 00:11:31.635 27405.964 - 27525.120: 99.2427% ( 5) 00:11:31.635 27525.120 - 27644.276: 99.2905% ( 6) 00:11:31.635 27644.276 - 27763.433: 99.3383% ( 6) 00:11:31.635 27763.433 - 27882.589: 99.4101% ( 9) 00:11:31.635 27882.589 - 
28001.745: 99.4659% ( 7) 00:11:31.635 28001.745 - 28120.902: 99.5217% ( 7) 00:11:31.635 28120.902 - 28240.058: 99.5934% ( 9) 00:11:31.635 28240.058 - 28359.215: 99.6173% ( 3) 00:11:31.635 28359.215 - 28478.371: 99.6413% ( 3) 00:11:31.635 28478.371 - 28597.527: 99.6652% ( 3) 00:11:31.635 28597.527 - 28716.684: 99.6891% ( 3) 00:11:31.635 28716.684 - 28835.840: 99.7050% ( 2) 00:11:31.635 28835.840 - 28954.996: 99.7290% ( 3) 00:11:31.635 28954.996 - 29074.153: 99.7529% ( 3) 00:11:31.635 29074.153 - 29193.309: 99.7768% ( 3) 00:11:31.635 29193.309 - 29312.465: 99.8007% ( 3) 00:11:31.635 29312.465 - 29431.622: 99.8246% ( 3) 00:11:31.635 29431.622 - 29550.778: 99.8565% ( 4) 00:11:31.635 29550.778 - 29669.935: 99.8804% ( 3) 00:11:31.635 29669.935 - 29789.091: 99.9043% ( 3) 00:11:31.635 29789.091 - 29908.247: 99.9283% ( 3) 00:11:31.635 29908.247 - 30027.404: 99.9522% ( 3) 00:11:31.635 30027.404 - 30146.560: 99.9761% ( 3) 00:11:31.635 30146.560 - 30265.716: 100.0000% ( 3) 00:11:31.635 00:11:31.635 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:31.635 ============================================================================== 00:11:31.635 Range in us Cumulative IO count 00:11:31.635 7626.007 - 7685.585: 0.0239% ( 3) 00:11:31.635 7685.585 - 7745.164: 0.1196% ( 12) 00:11:31.635 7745.164 - 7804.742: 0.4305% ( 39) 00:11:31.636 7804.742 - 7864.320: 0.9487% ( 65) 00:11:31.636 7864.320 - 7923.898: 1.7698% ( 103) 00:11:31.636 7923.898 - 7983.476: 2.6786% ( 114) 00:11:31.636 7983.476 - 8043.055: 3.7707% ( 137) 00:11:31.636 8043.055 - 8102.633: 4.9904% ( 153) 00:11:31.636 8102.633 - 8162.211: 6.3217% ( 167) 00:11:31.636 8162.211 - 8221.789: 7.8205% ( 188) 00:11:31.636 8221.789 - 8281.367: 9.3989% ( 198) 00:11:31.636 8281.367 - 8340.945: 11.0651% ( 209) 00:11:31.636 8340.945 - 8400.524: 12.8508% ( 224) 00:11:31.636 8400.524 - 8460.102: 14.7879% ( 243) 00:11:31.636 8460.102 - 8519.680: 16.6853% ( 238) 00:11:31.636 8519.680 - 8579.258: 18.5906% ( 239) 00:11:31.636 8579.258 - 8638.836: 20.5995% ( 252) 00:11:31.636 8638.836 - 8698.415: 22.4809% ( 236) 00:11:31.636 8698.415 - 8757.993: 24.1629% ( 211) 00:11:31.636 8757.993 - 8817.571: 25.6617% ( 188) 00:11:31.636 8817.571 - 8877.149: 27.1524% ( 187) 00:11:31.636 8877.149 - 8936.727: 28.8265% ( 210) 00:11:31.636 8936.727 - 8996.305: 30.7637% ( 243) 00:11:31.636 8996.305 - 9055.884: 32.9640% ( 276) 00:11:31.636 9055.884 - 9115.462: 35.2280% ( 284) 00:11:31.636 9115.462 - 9175.040: 37.7152% ( 312) 00:11:31.636 9175.040 - 9234.618: 40.4018% ( 337) 00:11:31.636 9234.618 - 9294.196: 43.1282% ( 342) 00:11:31.636 9294.196 - 9353.775: 45.8546% ( 342) 00:11:31.636 9353.775 - 9413.353: 48.4774% ( 329) 00:11:31.636 9413.353 - 9472.931: 51.1001% ( 329) 00:11:31.636 9472.931 - 9532.509: 53.7946% ( 338) 00:11:31.636 9532.509 - 9592.087: 56.4493% ( 333) 00:11:31.636 9592.087 - 9651.665: 59.1598% ( 340) 00:11:31.636 9651.665 - 9711.244: 61.7347% ( 323) 00:11:31.636 9711.244 - 9770.822: 64.4611% ( 342) 00:11:31.636 9770.822 - 9830.400: 67.0121% ( 320) 00:11:31.636 9830.400 - 9889.978: 69.4914% ( 311) 00:11:31.636 9889.978 - 9949.556: 71.5641% ( 260) 00:11:31.636 9949.556 - 10009.135: 73.5092% ( 244) 00:11:31.636 10009.135 - 10068.713: 75.0558% ( 194) 00:11:31.636 10068.713 - 10128.291: 76.3154% ( 158) 00:11:31.636 10128.291 - 10187.869: 77.1923% ( 110) 00:11:31.636 10187.869 - 10247.447: 77.8779% ( 86) 00:11:31.636 10247.447 - 10307.025: 78.4439% ( 71) 00:11:31.636 10307.025 - 10366.604: 78.9541% ( 64) 00:11:31.636 10366.604 - 10426.182: 79.3686% ( 52) 
00:11:31.636 10426.182 - 10485.760: 79.7353% ( 46) 00:11:31.636 10485.760 - 10545.338: 80.0622% ( 41) 00:11:31.636 10545.338 - 10604.916: 80.3890% ( 41) 00:11:31.636 10604.916 - 10664.495: 80.6920% ( 38) 00:11:31.636 10664.495 - 10724.073: 81.0108% ( 40) 00:11:31.636 10724.073 - 10783.651: 81.3377% ( 41) 00:11:31.636 10783.651 - 10843.229: 81.6167% ( 35) 00:11:31.636 10843.229 - 10902.807: 81.8957% ( 35) 00:11:31.636 10902.807 - 10962.385: 82.1668% ( 34) 00:11:31.636 10962.385 - 11021.964: 82.4378% ( 34) 00:11:31.636 11021.964 - 11081.542: 82.7009% ( 33) 00:11:31.636 11081.542 - 11141.120: 82.9480% ( 31) 00:11:31.636 11141.120 - 11200.698: 83.1473% ( 25) 00:11:31.636 11200.698 - 11260.276: 83.3466% ( 25) 00:11:31.636 11260.276 - 11319.855: 83.5220% ( 22) 00:11:31.636 11319.855 - 11379.433: 83.7372% ( 27) 00:11:31.636 11379.433 - 11439.011: 83.9445% ( 26) 00:11:31.636 11439.011 - 11498.589: 84.1598% ( 27) 00:11:31.636 11498.589 - 11558.167: 84.3511% ( 24) 00:11:31.636 11558.167 - 11617.745: 84.5584% ( 26) 00:11:31.636 11617.745 - 11677.324: 84.7577% ( 25) 00:11:31.636 11677.324 - 11736.902: 84.9410% ( 23) 00:11:31.636 11736.902 - 11796.480: 85.1403% ( 25) 00:11:31.636 11796.480 - 11856.058: 85.3396% ( 25) 00:11:31.636 11856.058 - 11915.636: 85.5070% ( 21) 00:11:31.636 11915.636 - 11975.215: 85.6744% ( 21) 00:11:31.636 11975.215 - 12034.793: 85.8578% ( 23) 00:11:31.636 12034.793 - 12094.371: 86.0332% ( 22) 00:11:31.636 12094.371 - 12153.949: 86.2006% ( 21) 00:11:31.636 12153.949 - 12213.527: 86.3760% ( 22) 00:11:31.636 12213.527 - 12273.105: 86.5513% ( 22) 00:11:31.636 12273.105 - 12332.684: 86.7427% ( 24) 00:11:31.636 12332.684 - 12392.262: 86.9659% ( 28) 00:11:31.636 12392.262 - 12451.840: 87.2290% ( 33) 00:11:31.636 12451.840 - 12511.418: 87.4920% ( 33) 00:11:31.636 12511.418 - 12570.996: 87.8029% ( 39) 00:11:31.636 12570.996 - 12630.575: 88.1378% ( 42) 00:11:31.636 12630.575 - 12690.153: 88.4646% ( 41) 00:11:31.636 12690.153 - 12749.731: 88.7994% ( 42) 00:11:31.636 12749.731 - 12809.309: 89.1263% ( 41) 00:11:31.636 12809.309 - 12868.887: 89.4611% ( 42) 00:11:31.636 12868.887 - 12928.465: 89.7879% ( 41) 00:11:31.636 12928.465 - 12988.044: 90.1228% ( 42) 00:11:31.636 12988.044 - 13047.622: 90.4735% ( 44) 00:11:31.636 13047.622 - 13107.200: 90.7844% ( 39) 00:11:31.636 13107.200 - 13166.778: 91.0714% ( 36) 00:11:31.636 13166.778 - 13226.356: 91.3903% ( 40) 00:11:31.636 13226.356 - 13285.935: 91.7251% ( 42) 00:11:31.636 13285.935 - 13345.513: 92.0121% ( 36) 00:11:31.636 13345.513 - 13405.091: 92.2274% ( 27) 00:11:31.636 13405.091 - 13464.669: 92.4267% ( 25) 00:11:31.636 13464.669 - 13524.247: 92.6100% ( 23) 00:11:31.636 13524.247 - 13583.825: 92.7854% ( 22) 00:11:31.636 13583.825 - 13643.404: 92.9688% ( 23) 00:11:31.636 13643.404 - 13702.982: 93.1441% ( 22) 00:11:31.636 13702.982 - 13762.560: 93.3355% ( 24) 00:11:31.636 13762.560 - 13822.138: 93.5029% ( 21) 00:11:31.636 13822.138 - 13881.716: 93.6783% ( 22) 00:11:31.636 13881.716 - 13941.295: 93.8297% ( 19) 00:11:31.636 13941.295 - 14000.873: 93.9971% ( 21) 00:11:31.636 14000.873 - 14060.451: 94.1885% ( 24) 00:11:31.636 14060.451 - 14120.029: 94.3399% ( 19) 00:11:31.636 14120.029 - 14179.607: 94.4994% ( 20) 00:11:31.636 14179.607 - 14239.185: 94.6907% ( 24) 00:11:31.636 14239.185 - 14298.764: 94.8422% ( 19) 00:11:31.636 14298.764 - 14358.342: 95.0096% ( 21) 00:11:31.636 14358.342 - 14417.920: 95.1929% ( 23) 00:11:31.636 14417.920 - 14477.498: 95.3763% ( 23) 00:11:31.636 14477.498 - 14537.076: 95.5357% ( 20) 00:11:31.636 14537.076 - 14596.655: 
95.7350% ( 25) 00:11:31.636 14596.655 - 14656.233: 95.9423% ( 26) 00:11:31.636 14656.233 - 14715.811: 96.1575% ( 27) 00:11:31.636 14715.811 - 14775.389: 96.3409% ( 23) 00:11:31.636 14775.389 - 14834.967: 96.5163% ( 22) 00:11:31.636 14834.967 - 14894.545: 96.7076% ( 24) 00:11:31.636 14894.545 - 14954.124: 96.8989% ( 24) 00:11:31.636 14954.124 - 15013.702: 97.0902% ( 24) 00:11:31.636 15013.702 - 15073.280: 97.2736% ( 23) 00:11:31.636 15073.280 - 15132.858: 97.4091% ( 17) 00:11:31.636 15132.858 - 15192.436: 97.5686% ( 20) 00:11:31.636 15192.436 - 15252.015: 97.6483% ( 10) 00:11:31.636 15252.015 - 15371.171: 97.7679% ( 15) 00:11:31.636 15371.171 - 15490.327: 97.8874% ( 15) 00:11:31.636 15490.327 - 15609.484: 97.9592% ( 9) 00:11:31.636 17754.298 - 17873.455: 97.9831% ( 3) 00:11:31.636 17873.455 - 17992.611: 97.9990% ( 2) 00:11:31.636 17992.611 - 18111.767: 98.0309% ( 4) 00:11:31.636 18111.767 - 18230.924: 98.0628% ( 4) 00:11:31.636 18230.924 - 18350.080: 98.0867% ( 3) 00:11:31.636 18350.080 - 18469.236: 98.1107% ( 3) 00:11:31.636 18469.236 - 18588.393: 98.1346% ( 3) 00:11:31.636 18588.393 - 18707.549: 98.1505% ( 2) 00:11:31.636 18707.549 - 18826.705: 98.1744% ( 3) 00:11:31.636 18826.705 - 18945.862: 98.1983% ( 3) 00:11:31.636 18945.862 - 19065.018: 98.2223% ( 3) 00:11:31.636 19065.018 - 19184.175: 98.2462% ( 3) 00:11:31.636 19184.175 - 19303.331: 98.2701% ( 3) 00:11:31.636 19303.331 - 19422.487: 98.2940% ( 3) 00:11:31.636 19422.487 - 19541.644: 98.3179% ( 3) 00:11:31.636 19541.644 - 19660.800: 98.3418% ( 3) 00:11:31.636 19660.800 - 19779.956: 98.3658% ( 3) 00:11:31.636 19779.956 - 19899.113: 98.3897% ( 3) 00:11:31.636 19899.113 - 20018.269: 98.4216% ( 4) 00:11:31.636 20018.269 - 20137.425: 98.4455% ( 3) 00:11:31.636 20137.425 - 20256.582: 98.4694% ( 3) 00:11:31.636 25261.149 - 25380.305: 98.4933% ( 3) 00:11:31.636 25380.305 - 25499.462: 98.5172% ( 3) 00:11:31.636 25499.462 - 25618.618: 98.5411% ( 3) 00:11:31.636 25618.618 - 25737.775: 98.5651% ( 3) 00:11:31.636 25737.775 - 25856.931: 98.5890% ( 3) 00:11:31.636 25856.931 - 25976.087: 98.6129% ( 3) 00:11:31.636 25976.087 - 26095.244: 98.6846% ( 9) 00:11:31.636 26095.244 - 26214.400: 98.7564% ( 9) 00:11:31.636 26214.400 - 26333.556: 98.8361% ( 10) 00:11:31.636 26333.556 - 26452.713: 98.8999% ( 8) 00:11:31.636 26452.713 - 26571.869: 98.9716% ( 9) 00:11:31.636 26571.869 - 26691.025: 99.0513% ( 10) 00:11:31.636 26691.025 - 26810.182: 99.1151% ( 8) 00:11:31.636 26810.182 - 26929.338: 99.2028% ( 11) 00:11:31.636 26929.338 - 27048.495: 99.3304% ( 16) 00:11:31.636 27048.495 - 27167.651: 99.4420% ( 14) 00:11:31.636 27167.651 - 27286.807: 99.5695% ( 16) 00:11:31.636 27286.807 - 27405.964: 99.6732% ( 13) 00:11:31.636 27405.964 - 27525.120: 99.7529% ( 10) 00:11:31.636 27525.120 - 27644.276: 99.8166% ( 8) 00:11:31.636 27644.276 - 27763.433: 99.8724% ( 7) 00:11:31.636 27763.433 - 27882.589: 99.9203% ( 6) 00:11:31.636 27882.589 - 28001.745: 99.9681% ( 6) 00:11:31.636 28001.745 - 28120.902: 100.0000% ( 4) 00:11:31.636 00:11:31.636 09:17:17 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:11:33.014 Initializing NVMe Controllers 00:11:33.014 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:33.014 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:33.014 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:33.014 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:33.014 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:33.014 
Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:11:33.014 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:11:33.014 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:11:33.014 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:11:33.014 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:11:33.014 Initialization complete. Launching workers.
00:11:33.014 ========================================================
00:11:33.014 Latency(us)
00:11:33.014 Device Information : IOPS MiB/s Average min max
00:11:33.014 PCIE (0000:00:10.0) NSID 1 from core 0: 11025.06 129.20 11633.76 8462.18 44711.24
00:11:33.014 PCIE (0000:00:11.0) NSID 1 from core 0: 11025.06 129.20 11607.49 8591.21 42232.60
00:11:33.014 PCIE (0000:00:13.0) NSID 1 from core 0: 11025.06 129.20 11581.14 8513.29 40244.68
00:11:33.014 PCIE (0000:00:12.0) NSID 1 from core 0: 11025.06 129.20 11554.81 8568.84 37670.39
00:11:33.014 PCIE (0000:00:12.0) NSID 2 from core 0: 11088.78 129.95 11462.23 8615.83 29396.77
00:11:33.014 PCIE (0000:00:12.0) NSID 3 from core 0: 11088.78 129.95 11435.87 8566.32 27002.47
00:11:33.014 ========================================================
00:11:33.014 Total : 66277.79 776.69 11545.70 8462.18 44711.24
00:11:33.014
00:11:33.014 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:11:33.014 =================================================================================
00:11:33.014 1.00000% : 8757.993us
00:11:33.014 10.00000% : 9711.244us
00:11:33.014 25.00000% : 10426.182us
00:11:33.014 50.00000% : 11141.120us
00:11:33.014 75.00000% : 12034.793us
00:11:33.014 90.00000% : 13643.404us
00:11:33.014 95.00000% : 14477.498us
00:11:33.014 98.00000% : 16086.109us
00:11:33.014 99.00000% : 35031.971us
00:11:33.014 99.50000% : 42657.978us
00:11:33.014 99.90000% : 44326.167us
00:11:33.014 99.99000% : 44802.793us
00:11:33.014 99.99900% : 44802.793us
00:11:33.014 99.99990% : 44802.793us
00:11:33.014 99.99999% : 44802.793us
00:11:33.014
00:11:33.014 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:11:33.014 =================================================================================
00:11:33.014 1.00000% : 8877.149us
00:11:33.014 10.00000% : 9770.822us
00:11:33.014 25.00000% : 10426.182us
00:11:33.014 50.00000% : 11141.120us
00:11:33.014 75.00000% : 11975.215us
00:11:33.014 90.00000% : 13702.982us
00:11:33.014 95.00000% : 14417.920us
00:11:33.014 98.00000% : 15847.796us
00:11:33.014 99.00000% : 32648.844us
00:11:33.014 99.50000% : 40036.538us
00:11:33.014 99.90000% : 41943.040us
00:11:33.014 99.99000% : 42419.665us
00:11:33.014 99.99900% : 42419.665us
00:11:33.014 99.99990% : 42419.665us
00:11:33.014 99.99999% : 42419.665us
00:11:33.014
00:11:33.014 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:11:33.014 =================================================================================
00:11:33.014 1.00000% : 8936.727us
00:11:33.014 10.00000% : 9770.822us
00:11:33.014 25.00000% : 10426.182us
00:11:33.014 50.00000% : 11141.120us
00:11:33.014 75.00000% : 11975.215us
00:11:33.014 90.00000% : 13583.825us
00:11:33.014 95.00000% : 14358.342us
00:11:33.014 98.00000% : 15966.953us
00:11:33.014 99.00000% : 30980.655us
00:11:33.014 99.50000% : 38368.349us
00:11:33.014 99.90000% : 40036.538us
00:11:33.014 99.99000% : 40274.851us
00:11:33.014 99.99900% : 40274.851us
00:11:33.014 99.99990% : 40274.851us
00:11:33.014 99.99999% : 40274.851us
00:11:33.014
00:11:33.014 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:11:33.014
================================================================================= 00:11:33.014 1.00000% : 8936.727us 00:11:33.014 10.00000% : 9770.822us 00:11:33.014 25.00000% : 10426.182us 00:11:33.014 50.00000% : 11081.542us 00:11:33.014 75.00000% : 11915.636us 00:11:33.014 90.00000% : 13643.404us 00:11:33.014 95.00000% : 14477.498us 00:11:33.014 98.00000% : 15966.953us 00:11:33.014 99.00000% : 28478.371us 00:11:33.014 99.50000% : 35746.909us 00:11:33.014 99.90000% : 37415.098us 00:11:33.014 99.99000% : 37653.411us 00:11:33.014 99.99900% : 37891.724us 00:11:33.014 99.99990% : 37891.724us 00:11:33.014 99.99999% : 37891.724us 00:11:33.014 00:11:33.014 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:33.014 ================================================================================= 00:11:33.014 1.00000% : 8936.727us 00:11:33.014 10.00000% : 9770.822us 00:11:33.014 25.00000% : 10426.182us 00:11:33.014 50.00000% : 11141.120us 00:11:33.014 75.00000% : 11915.636us 00:11:33.014 90.00000% : 13583.825us 00:11:33.014 95.00000% : 14417.920us 00:11:33.014 98.00000% : 15966.953us 00:11:33.014 99.00000% : 19899.113us 00:11:33.014 99.50000% : 27286.807us 00:11:33.014 99.90000% : 29074.153us 00:11:33.014 99.99000% : 29431.622us 00:11:33.014 99.99900% : 29431.622us 00:11:33.014 99.99990% : 29431.622us 00:11:33.014 99.99999% : 29431.622us 00:11:33.015 00:11:33.015 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:33.015 ================================================================================= 00:11:33.015 1.00000% : 8936.727us 00:11:33.015 10.00000% : 9770.822us 00:11:33.015 25.00000% : 10426.182us 00:11:33.015 50.00000% : 11141.120us 00:11:33.015 75.00000% : 11915.636us 00:11:33.015 90.00000% : 13643.404us 00:11:33.015 95.00000% : 14477.498us 00:11:33.015 98.00000% : 16086.109us 00:11:33.015 99.00000% : 17396.829us 00:11:33.015 99.50000% : 24903.680us 00:11:33.015 99.90000% : 26571.869us 00:11:33.015 99.99000% : 27048.495us 00:11:33.015 99.99900% : 27048.495us 00:11:33.015 99.99990% : 27048.495us 00:11:33.015 99.99999% : 27048.495us 00:11:33.015 00:11:33.015 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:33.015 ============================================================================== 00:11:33.015 Range in us Cumulative IO count 00:11:33.015 8460.102 - 8519.680: 0.1264% ( 14) 00:11:33.015 8519.680 - 8579.258: 0.2529% ( 14) 00:11:33.015 8579.258 - 8638.836: 0.4245% ( 19) 00:11:33.015 8638.836 - 8698.415: 0.7406% ( 35) 00:11:33.015 8698.415 - 8757.993: 1.0116% ( 30) 00:11:33.015 8757.993 - 8817.571: 1.2464% ( 26) 00:11:33.015 8817.571 - 8877.149: 1.5354% ( 32) 00:11:33.015 8877.149 - 8936.727: 1.9147% ( 42) 00:11:33.015 8936.727 - 8996.305: 2.1767% ( 29) 00:11:33.015 8996.305 - 9055.884: 2.4837% ( 34) 00:11:33.015 9055.884 - 9115.462: 2.8450% ( 40) 00:11:33.015 9115.462 - 9175.040: 3.2243% ( 42) 00:11:33.015 9175.040 - 9234.618: 3.6308% ( 45) 00:11:33.015 9234.618 - 9294.196: 4.0553% ( 47) 00:11:33.015 9294.196 - 9353.775: 4.4888% ( 48) 00:11:33.015 9353.775 - 9413.353: 5.2475% ( 84) 00:11:33.015 9413.353 - 9472.931: 6.0965% ( 94) 00:11:33.015 9472.931 - 9532.509: 7.1983% ( 122) 00:11:33.015 9532.509 - 9592.087: 8.3273% ( 125) 00:11:33.015 9592.087 - 9651.665: 9.3027% ( 108) 00:11:33.015 9651.665 - 9711.244: 10.4498% ( 127) 00:11:33.015 9711.244 - 9770.822: 11.4523% ( 111) 00:11:33.015 9770.822 - 9830.400: 12.5361% ( 120) 00:11:33.015 9830.400 - 9889.978: 13.6561% ( 124) 00:11:33.015 9889.978 - 9949.556: 14.7218% ( 118) 
00:11:33.015 9949.556 - 10009.135: 15.9501% ( 136) 00:11:33.015 10009.135 - 10068.713: 17.3862% ( 159) 00:11:33.015 10068.713 - 10128.291: 18.7319% ( 149) 00:11:33.015 10128.291 - 10187.869: 19.9603% ( 136) 00:11:33.015 10187.869 - 10247.447: 21.4505% ( 165) 00:11:33.015 10247.447 - 10307.025: 23.2298% ( 197) 00:11:33.015 10307.025 - 10366.604: 24.8736% ( 182) 00:11:33.015 10366.604 - 10426.182: 26.9238% ( 227) 00:11:33.015 10426.182 - 10485.760: 28.8656% ( 215) 00:11:33.015 10485.760 - 10545.338: 31.1326% ( 251) 00:11:33.015 10545.338 - 10604.916: 33.4176% ( 253) 00:11:33.015 10604.916 - 10664.495: 35.6214% ( 244) 00:11:33.015 10664.495 - 10724.073: 37.8613% ( 248) 00:11:33.015 10724.073 - 10783.651: 39.8663% ( 222) 00:11:33.015 10783.651 - 10843.229: 42.0430% ( 241) 00:11:33.015 10843.229 - 10902.807: 44.0390% ( 221) 00:11:33.015 10902.807 - 10962.385: 45.9267% ( 209) 00:11:33.015 10962.385 - 11021.964: 47.8866% ( 217) 00:11:33.015 11021.964 - 11081.542: 49.7561% ( 207) 00:11:33.015 11081.542 - 11141.120: 51.6528% ( 210) 00:11:33.015 11141.120 - 11200.698: 53.4772% ( 202) 00:11:33.015 11200.698 - 11260.276: 55.3829% ( 211) 00:11:33.015 11260.276 - 11319.855: 57.2977% ( 212) 00:11:33.015 11319.855 - 11379.433: 59.1582% ( 206) 00:11:33.015 11379.433 - 11439.011: 61.0098% ( 205) 00:11:33.015 11439.011 - 11498.589: 62.6535% ( 182) 00:11:33.015 11498.589 - 11558.167: 64.3064% ( 183) 00:11:33.015 11558.167 - 11617.745: 65.8869% ( 175) 00:11:33.015 11617.745 - 11677.324: 67.4314% ( 171) 00:11:33.015 11677.324 - 11736.902: 68.9487% ( 168) 00:11:33.015 11736.902 - 11796.480: 70.5744% ( 180) 00:11:33.015 11796.480 - 11856.058: 72.1550% ( 175) 00:11:33.015 11856.058 - 11915.636: 73.6272% ( 163) 00:11:33.015 11915.636 - 11975.215: 74.9639% ( 148) 00:11:33.015 11975.215 - 12034.793: 76.1380% ( 130) 00:11:33.015 12034.793 - 12094.371: 77.2218% ( 120) 00:11:33.015 12094.371 - 12153.949: 78.2424% ( 113) 00:11:33.015 12153.949 - 12213.527: 79.2088% ( 107) 00:11:33.015 12213.527 - 12273.105: 80.0036% ( 88) 00:11:33.015 12273.105 - 12332.684: 80.6539% ( 72) 00:11:33.015 12332.684 - 12392.262: 81.3764% ( 80) 00:11:33.015 12392.262 - 12451.840: 82.0358% ( 73) 00:11:33.015 12451.840 - 12511.418: 82.5867% ( 61) 00:11:33.015 12511.418 - 12570.996: 83.0202% ( 48) 00:11:33.015 12570.996 - 12630.575: 83.4899% ( 52) 00:11:33.015 12630.575 - 12690.153: 83.9595% ( 52) 00:11:33.015 12690.153 - 12749.731: 84.3840% ( 47) 00:11:33.015 12749.731 - 12809.309: 84.8356% ( 50) 00:11:33.015 12809.309 - 12868.887: 85.3053% ( 52) 00:11:33.015 12868.887 - 12928.465: 85.7659% ( 51) 00:11:33.015 12928.465 - 12988.044: 86.0639% ( 33) 00:11:33.015 12988.044 - 13047.622: 86.4704% ( 45) 00:11:33.015 13047.622 - 13107.200: 86.8407% ( 41) 00:11:33.015 13107.200 - 13166.778: 87.1749% ( 37) 00:11:33.015 13166.778 - 13226.356: 87.5181% ( 38) 00:11:33.015 13226.356 - 13285.935: 87.9155% ( 44) 00:11:33.015 13285.935 - 13345.513: 88.2225% ( 34) 00:11:33.015 13345.513 - 13405.091: 88.6741% ( 50) 00:11:33.015 13405.091 - 13464.669: 89.0354% ( 40) 00:11:33.015 13464.669 - 13524.247: 89.3696% ( 37) 00:11:33.015 13524.247 - 13583.825: 89.8121% ( 49) 00:11:33.015 13583.825 - 13643.404: 90.2457% ( 48) 00:11:33.015 13643.404 - 13702.982: 90.7334% ( 54) 00:11:33.015 13702.982 - 13762.560: 91.1217% ( 43) 00:11:33.015 13762.560 - 13822.138: 91.4740% ( 39) 00:11:33.015 13822.138 - 13881.716: 91.7991% ( 36) 00:11:33.015 13881.716 - 13941.295: 92.1152% ( 35) 00:11:33.015 13941.295 - 14000.873: 92.5036% ( 43) 00:11:33.015 14000.873 - 14060.451: 92.8468% ( 
38) 00:11:33.015 14060.451 - 14120.029: 93.1991% ( 39) 00:11:33.015 14120.029 - 14179.607: 93.5061% ( 34) 00:11:33.015 14179.607 - 14239.185: 93.8313% ( 36) 00:11:33.015 14239.185 - 14298.764: 94.2016% ( 41) 00:11:33.015 14298.764 - 14358.342: 94.5629% ( 40) 00:11:33.015 14358.342 - 14417.920: 94.8248% ( 29) 00:11:33.015 14417.920 - 14477.498: 95.1138% ( 32) 00:11:33.015 14477.498 - 14537.076: 95.3667% ( 28) 00:11:33.015 14537.076 - 14596.655: 95.6467% ( 31) 00:11:33.015 14596.655 - 14656.233: 95.8996% ( 28) 00:11:33.015 14656.233 - 14715.811: 96.1976% ( 33) 00:11:33.015 14715.811 - 14775.389: 96.4234% ( 25) 00:11:33.015 14775.389 - 14834.967: 96.6673% ( 27) 00:11:33.015 14834.967 - 14894.545: 96.7937% ( 14) 00:11:33.015 14894.545 - 14954.124: 96.9563% ( 18) 00:11:33.015 14954.124 - 15013.702: 97.1008% ( 16) 00:11:33.015 15013.702 - 15073.280: 97.2001% ( 11) 00:11:33.015 15073.280 - 15132.858: 97.2634% ( 7) 00:11:33.015 15132.858 - 15192.436: 97.3356% ( 8) 00:11:33.015 15192.436 - 15252.015: 97.3537% ( 2) 00:11:33.015 15252.015 - 15371.171: 97.4440% ( 10) 00:11:33.015 15371.171 - 15490.327: 97.5343% ( 10) 00:11:33.015 15490.327 - 15609.484: 97.7240% ( 21) 00:11:33.015 15609.484 - 15728.640: 97.8414% ( 13) 00:11:33.015 15728.640 - 15847.796: 97.8956% ( 6) 00:11:33.015 15847.796 - 15966.953: 97.9678% ( 8) 00:11:33.015 15966.953 - 16086.109: 98.0853% ( 13) 00:11:33.015 16086.109 - 16205.265: 98.1304% ( 5) 00:11:33.015 16205.265 - 16324.422: 98.2117% ( 9) 00:11:33.015 16324.422 - 16443.578: 98.2569% ( 5) 00:11:33.015 16443.578 - 16562.735: 98.3562% ( 11) 00:11:33.015 16562.735 - 16681.891: 98.4104% ( 6) 00:11:33.015 16681.891 - 16801.047: 98.4917% ( 9) 00:11:33.015 16801.047 - 16920.204: 98.5730% ( 9) 00:11:33.015 16920.204 - 17039.360: 98.6452% ( 8) 00:11:33.015 17039.360 - 17158.516: 98.7085% ( 7) 00:11:33.015 17158.516 - 17277.673: 98.7446% ( 4) 00:11:33.015 17277.673 - 17396.829: 98.7807% ( 4) 00:11:33.015 17396.829 - 17515.985: 98.8259% ( 5) 00:11:33.015 17515.985 - 17635.142: 98.8439% ( 2) 00:11:33.015 34078.720 - 34317.033: 98.8801% ( 4) 00:11:33.015 34317.033 - 34555.345: 98.9342% ( 6) 00:11:33.015 34555.345 - 34793.658: 98.9884% ( 6) 00:11:33.015 34793.658 - 35031.971: 99.0336% ( 5) 00:11:33.015 35031.971 - 35270.284: 99.0878% ( 6) 00:11:33.015 35270.284 - 35508.596: 99.1329% ( 5) 00:11:33.015 35508.596 - 35746.909: 99.1871% ( 6) 00:11:33.015 35746.909 - 35985.222: 99.2413% ( 6) 00:11:33.015 35985.222 - 36223.535: 99.2955% ( 6) 00:11:33.015 36223.535 - 36461.847: 99.3497% ( 6) 00:11:33.015 36461.847 - 36700.160: 99.4039% ( 6) 00:11:33.015 36700.160 - 36938.473: 99.4220% ( 2) 00:11:33.015 41943.040 - 42181.353: 99.4491% ( 3) 00:11:33.015 42181.353 - 42419.665: 99.4942% ( 5) 00:11:33.015 42419.665 - 42657.978: 99.5394% ( 5) 00:11:33.015 42657.978 - 42896.291: 99.5936% ( 6) 00:11:33.015 42896.291 - 43134.604: 99.6568% ( 7) 00:11:33.015 43134.604 - 43372.916: 99.7020% ( 5) 00:11:33.015 43372.916 - 43611.229: 99.7561% ( 6) 00:11:33.015 43611.229 - 43849.542: 99.8013% ( 5) 00:11:33.015 43849.542 - 44087.855: 99.8555% ( 6) 00:11:33.015 44087.855 - 44326.167: 99.9097% ( 6) 00:11:33.015 44326.167 - 44564.480: 99.9639% ( 6) 00:11:33.015 44564.480 - 44802.793: 100.0000% ( 4) 00:11:33.015 00:11:33.015 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:33.015 ============================================================================== 00:11:33.015 Range in us Cumulative IO count 00:11:33.015 8579.258 - 8638.836: 0.0181% ( 2) 00:11:33.015 8638.836 - 8698.415: 0.1535% ( 15) 
00:11:33.015 8698.415 - 8757.993: 0.3974% ( 27) 00:11:33.015 8757.993 - 8817.571: 0.6864% ( 32) 00:11:33.015 8817.571 - 8877.149: 1.1019% ( 46) 00:11:33.015 8877.149 - 8936.727: 1.4902% ( 43) 00:11:33.015 8936.727 - 8996.305: 1.9418% ( 50) 00:11:33.015 8996.305 - 9055.884: 2.4296% ( 54) 00:11:33.015 9055.884 - 9115.462: 2.9444% ( 57) 00:11:33.015 9115.462 - 9175.040: 3.4140% ( 52) 00:11:33.015 9175.040 - 9234.618: 3.8295% ( 46) 00:11:33.015 9234.618 - 9294.196: 4.1998% ( 41) 00:11:33.015 9294.196 - 9353.775: 4.5791% ( 42) 00:11:33.015 9353.775 - 9413.353: 4.9946% ( 46) 00:11:33.015 9413.353 - 9472.931: 5.3288% ( 37) 00:11:33.015 9472.931 - 9532.509: 5.9158% ( 65) 00:11:33.015 9532.509 - 9592.087: 6.8371% ( 102) 00:11:33.015 9592.087 - 9651.665: 7.8215% ( 109) 00:11:33.015 9651.665 - 9711.244: 9.0860% ( 140) 00:11:33.016 9711.244 - 9770.822: 10.3595% ( 141) 00:11:33.016 9770.822 - 9830.400: 11.6239% ( 140) 00:11:33.016 9830.400 - 9889.978: 12.9606% ( 148) 00:11:33.016 9889.978 - 9949.556: 14.3786% ( 157) 00:11:33.016 9949.556 - 10009.135: 15.5708% ( 132) 00:11:33.016 10009.135 - 10068.713: 16.6095% ( 115) 00:11:33.016 10068.713 - 10128.291: 17.7655% ( 128) 00:11:33.016 10128.291 - 10187.869: 19.0480% ( 142) 00:11:33.016 10187.869 - 10247.447: 20.3757% ( 147) 00:11:33.016 10247.447 - 10307.025: 21.7937% ( 157) 00:11:33.016 10307.025 - 10366.604: 23.5368% ( 193) 00:11:33.016 10366.604 - 10426.182: 25.3522% ( 201) 00:11:33.016 10426.182 - 10485.760: 27.4296% ( 230) 00:11:33.016 10485.760 - 10545.338: 29.5159% ( 231) 00:11:33.016 10545.338 - 10604.916: 31.6384% ( 235) 00:11:33.016 10604.916 - 10664.495: 33.6615% ( 224) 00:11:33.016 10664.495 - 10724.073: 35.7027% ( 226) 00:11:33.016 10724.073 - 10783.651: 38.2045% ( 277) 00:11:33.016 10783.651 - 10843.229: 40.6431% ( 270) 00:11:33.016 10843.229 - 10902.807: 42.9371% ( 254) 00:11:33.016 10902.807 - 10962.385: 45.1951% ( 250) 00:11:33.016 10962.385 - 11021.964: 47.5434% ( 260) 00:11:33.016 11021.964 - 11081.542: 49.7923% ( 249) 00:11:33.016 11081.542 - 11141.120: 51.9870% ( 243) 00:11:33.016 11141.120 - 11200.698: 53.9740% ( 220) 00:11:33.016 11200.698 - 11260.276: 55.9068% ( 214) 00:11:33.016 11260.276 - 11319.855: 57.9118% ( 222) 00:11:33.016 11319.855 - 11379.433: 59.9440% ( 225) 00:11:33.016 11379.433 - 11439.011: 61.8046% ( 206) 00:11:33.016 11439.011 - 11498.589: 63.6561% ( 205) 00:11:33.016 11498.589 - 11558.167: 65.4986% ( 204) 00:11:33.016 11558.167 - 11617.745: 67.1785% ( 186) 00:11:33.016 11617.745 - 11677.324: 68.9487% ( 196) 00:11:33.016 11677.324 - 11736.902: 70.5564% ( 178) 00:11:33.016 11736.902 - 11796.480: 72.1189% ( 173) 00:11:33.016 11796.480 - 11856.058: 73.5459% ( 158) 00:11:33.016 11856.058 - 11915.636: 74.8736% ( 147) 00:11:33.016 11915.636 - 11975.215: 76.1109% ( 137) 00:11:33.016 11975.215 - 12034.793: 77.1586% ( 116) 00:11:33.016 12034.793 - 12094.371: 78.4050% ( 138) 00:11:33.016 12094.371 - 12153.949: 79.3262% ( 102) 00:11:33.016 12153.949 - 12213.527: 80.0939% ( 85) 00:11:33.016 12213.527 - 12273.105: 80.8345% ( 82) 00:11:33.016 12273.105 - 12332.684: 81.4397% ( 67) 00:11:33.016 12332.684 - 12392.262: 81.9454% ( 56) 00:11:33.016 12392.262 - 12451.840: 82.4603% ( 57) 00:11:33.016 12451.840 - 12511.418: 82.8938% ( 48) 00:11:33.016 12511.418 - 12570.996: 83.3725% ( 53) 00:11:33.016 12570.996 - 12630.575: 83.8963% ( 58) 00:11:33.016 12630.575 - 12690.153: 84.2757% ( 42) 00:11:33.016 12690.153 - 12749.731: 84.5556% ( 31) 00:11:33.016 12749.731 - 12809.309: 84.8447% ( 32) 00:11:33.016 12809.309 - 12868.887: 85.1969% ( 39) 
00:11:33.016 12868.887 - 12928.465: 85.5762% ( 42) 00:11:33.016 12928.465 - 12988.044: 85.9827% ( 45) 00:11:33.016 12988.044 - 13047.622: 86.4252% ( 49) 00:11:33.016 13047.622 - 13107.200: 86.8587% ( 48) 00:11:33.016 13107.200 - 13166.778: 87.1568% ( 33) 00:11:33.016 13166.778 - 13226.356: 87.4548% ( 33) 00:11:33.016 13226.356 - 13285.935: 87.7348% ( 31) 00:11:33.016 13285.935 - 13345.513: 88.0780% ( 38) 00:11:33.016 13345.513 - 13405.091: 88.4122% ( 37) 00:11:33.016 13405.091 - 13464.669: 88.7735% ( 40) 00:11:33.016 13464.669 - 13524.247: 89.0444% ( 30) 00:11:33.016 13524.247 - 13583.825: 89.3064% ( 29) 00:11:33.016 13583.825 - 13643.404: 89.6767% ( 41) 00:11:33.016 13643.404 - 13702.982: 90.1734% ( 55) 00:11:33.016 13702.982 - 13762.560: 90.6882% ( 57) 00:11:33.016 13762.560 - 13822.138: 91.2211% ( 59) 00:11:33.016 13822.138 - 13881.716: 91.6546% ( 48) 00:11:33.016 13881.716 - 13941.295: 92.0069% ( 39) 00:11:33.016 13941.295 - 14000.873: 92.4585% ( 50) 00:11:33.016 14000.873 - 14060.451: 92.8559% ( 44) 00:11:33.016 14060.451 - 14120.029: 93.1539% ( 33) 00:11:33.016 14120.029 - 14179.607: 93.4790% ( 36) 00:11:33.016 14179.607 - 14239.185: 93.8764% ( 44) 00:11:33.016 14239.185 - 14298.764: 94.3190% ( 49) 00:11:33.016 14298.764 - 14358.342: 94.7254% ( 45) 00:11:33.016 14358.342 - 14417.920: 95.1048% ( 42) 00:11:33.016 14417.920 - 14477.498: 95.4931% ( 43) 00:11:33.016 14477.498 - 14537.076: 95.8183% ( 36) 00:11:33.016 14537.076 - 14596.655: 96.1344% ( 35) 00:11:33.016 14596.655 - 14656.233: 96.4053% ( 30) 00:11:33.016 14656.233 - 14715.811: 96.6492% ( 27) 00:11:33.016 14715.811 - 14775.389: 96.8660% ( 24) 00:11:33.016 14775.389 - 14834.967: 97.0195% ( 17) 00:11:33.016 14834.967 - 14894.545: 97.1098% ( 10) 00:11:33.016 14894.545 - 14954.124: 97.1911% ( 9) 00:11:33.016 14954.124 - 15013.702: 97.2634% ( 8) 00:11:33.016 15013.702 - 15073.280: 97.3356% ( 8) 00:11:33.016 15073.280 - 15132.858: 97.3808% ( 5) 00:11:33.016 15132.858 - 15192.436: 97.4440% ( 7) 00:11:33.016 15192.436 - 15252.015: 97.4982% ( 6) 00:11:33.016 15252.015 - 15371.171: 97.5524% ( 6) 00:11:33.016 15371.171 - 15490.327: 97.6427% ( 10) 00:11:33.016 15490.327 - 15609.484: 97.7511% ( 12) 00:11:33.016 15609.484 - 15728.640: 97.9769% ( 25) 00:11:33.016 15728.640 - 15847.796: 98.1304% ( 17) 00:11:33.016 15847.796 - 15966.953: 98.2749% ( 16) 00:11:33.016 15966.953 - 16086.109: 98.3562% ( 9) 00:11:33.016 16086.109 - 16205.265: 98.4285% ( 8) 00:11:33.016 16205.265 - 16324.422: 98.5098% ( 9) 00:11:33.016 16324.422 - 16443.578: 98.5910% ( 9) 00:11:33.016 16443.578 - 16562.735: 98.6814% ( 10) 00:11:33.016 16562.735 - 16681.891: 98.7355% ( 6) 00:11:33.016 16681.891 - 16801.047: 98.7807% ( 5) 00:11:33.016 16801.047 - 16920.204: 98.8259% ( 5) 00:11:33.016 16920.204 - 17039.360: 98.8439% ( 2) 00:11:33.016 31695.593 - 31933.905: 98.8710% ( 3) 00:11:33.016 31933.905 - 32172.218: 98.9252% ( 6) 00:11:33.016 32172.218 - 32410.531: 98.9794% ( 6) 00:11:33.016 32410.531 - 32648.844: 99.0426% ( 7) 00:11:33.016 32648.844 - 32887.156: 99.0968% ( 6) 00:11:33.016 32887.156 - 33125.469: 99.1510% ( 6) 00:11:33.016 33125.469 - 33363.782: 99.2052% ( 6) 00:11:33.016 33363.782 - 33602.095: 99.2594% ( 6) 00:11:33.016 33602.095 - 33840.407: 99.3136% ( 6) 00:11:33.016 33840.407 - 34078.720: 99.3678% ( 6) 00:11:33.016 34078.720 - 34317.033: 99.4220% ( 6) 00:11:33.016 39559.913 - 39798.225: 99.4491% ( 3) 00:11:33.016 39798.225 - 40036.538: 99.5033% ( 6) 00:11:33.016 40036.538 - 40274.851: 99.5394% ( 4) 00:11:33.016 40274.851 - 40513.164: 99.5845% ( 5) 00:11:33.016 
40513.164 - 40751.476: 99.6387% ( 6) 00:11:33.016 40751.476 - 40989.789: 99.6929% ( 6) 00:11:33.016 40989.789 - 41228.102: 99.7561% ( 7) 00:11:33.016 41228.102 - 41466.415: 99.8103% ( 6) 00:11:33.016 41466.415 - 41704.727: 99.8736% ( 7) 00:11:33.016 41704.727 - 41943.040: 99.9277% ( 6) 00:11:33.016 41943.040 - 42181.353: 99.9819% ( 6) 00:11:33.016 42181.353 - 42419.665: 100.0000% ( 2) 00:11:33.016 00:11:33.016 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:33.016 ============================================================================== 00:11:33.016 Range in us Cumulative IO count 00:11:33.016 8460.102 - 8519.680: 0.0090% ( 1) 00:11:33.016 8519.680 - 8579.258: 0.0723% ( 7) 00:11:33.016 8579.258 - 8638.836: 0.1806% ( 12) 00:11:33.016 8638.836 - 8698.415: 0.3161% ( 15) 00:11:33.016 8698.415 - 8757.993: 0.5058% ( 21) 00:11:33.016 8757.993 - 8817.571: 0.7225% ( 24) 00:11:33.016 8817.571 - 8877.149: 0.9483% ( 25) 00:11:33.016 8877.149 - 8936.727: 1.3006% ( 39) 00:11:33.016 8936.727 - 8996.305: 1.7973% ( 55) 00:11:33.016 8996.305 - 9055.884: 2.3392% ( 60) 00:11:33.016 9055.884 - 9115.462: 2.8811% ( 60) 00:11:33.016 9115.462 - 9175.040: 3.3779% ( 55) 00:11:33.016 9175.040 - 9234.618: 3.7753% ( 44) 00:11:33.016 9234.618 - 9294.196: 4.1637% ( 43) 00:11:33.016 9294.196 - 9353.775: 4.5069% ( 38) 00:11:33.016 9353.775 - 9413.353: 4.9043% ( 44) 00:11:33.016 9413.353 - 9472.931: 5.4552% ( 61) 00:11:33.016 9472.931 - 9532.509: 6.2139% ( 84) 00:11:33.016 9532.509 - 9592.087: 7.1803% ( 107) 00:11:33.016 9592.087 - 9651.665: 8.3363% ( 128) 00:11:33.016 9651.665 - 9711.244: 9.5285% ( 132) 00:11:33.016 9711.244 - 9770.822: 10.6575% ( 125) 00:11:33.016 9770.822 - 9830.400: 11.8858% ( 136) 00:11:33.016 9830.400 - 9889.978: 13.1051% ( 135) 00:11:33.016 9889.978 - 9949.556: 14.2522% ( 127) 00:11:33.016 9949.556 - 10009.135: 15.4444% ( 132) 00:11:33.016 10009.135 - 10068.713: 16.8624% ( 157) 00:11:33.016 10068.713 - 10128.291: 18.2713% ( 156) 00:11:33.016 10128.291 - 10187.869: 19.5990% ( 147) 00:11:33.016 10187.869 - 10247.447: 21.0531% ( 161) 00:11:33.016 10247.447 - 10307.025: 22.5163% ( 162) 00:11:33.016 10307.025 - 10366.604: 24.1420% ( 180) 00:11:33.016 10366.604 - 10426.182: 25.8400% ( 188) 00:11:33.016 10426.182 - 10485.760: 27.5199% ( 186) 00:11:33.016 10485.760 - 10545.338: 29.3624% ( 204) 00:11:33.016 10545.338 - 10604.916: 31.2229% ( 206) 00:11:33.016 10604.916 - 10664.495: 33.3815% ( 239) 00:11:33.016 10664.495 - 10724.073: 35.6575% ( 252) 00:11:33.016 10724.073 - 10783.651: 37.8342% ( 241) 00:11:33.016 10783.651 - 10843.229: 40.0741% ( 248) 00:11:33.016 10843.229 - 10902.807: 42.4133% ( 259) 00:11:33.016 10902.807 - 10962.385: 44.8428% ( 269) 00:11:33.016 10962.385 - 11021.964: 47.1189% ( 252) 00:11:33.016 11021.964 - 11081.542: 49.2955% ( 241) 00:11:33.016 11081.542 - 11141.120: 51.4451% ( 238) 00:11:33.016 11141.120 - 11200.698: 53.4682% ( 224) 00:11:33.016 11200.698 - 11260.276: 55.4462% ( 219) 00:11:33.016 11260.276 - 11319.855: 57.6680% ( 246) 00:11:33.016 11319.855 - 11379.433: 59.7182% ( 227) 00:11:33.016 11379.433 - 11439.011: 61.6239% ( 211) 00:11:33.016 11439.011 - 11498.589: 63.5296% ( 211) 00:11:33.016 11498.589 - 11558.167: 65.4353% ( 211) 00:11:33.016 11558.167 - 11617.745: 67.1423% ( 189) 00:11:33.016 11617.745 - 11677.324: 68.6958% ( 172) 00:11:33.016 11677.324 - 11736.902: 70.3306% ( 181) 00:11:33.016 11736.902 - 11796.480: 71.8118% ( 164) 00:11:33.016 11796.480 - 11856.058: 73.2117% ( 155) 00:11:33.016 11856.058 - 11915.636: 74.6116% ( 155) 00:11:33.016 
11915.636 - 11975.215: 75.8851% ( 141) 00:11:33.016 11975.215 - 12034.793: 76.9418% ( 117) 00:11:33.016 12034.793 - 12094.371: 77.9082% ( 107) 00:11:33.016 12094.371 - 12153.949: 78.8475% ( 104) 00:11:33.016 12153.949 - 12213.527: 79.6694% ( 91) 00:11:33.016 12213.527 - 12273.105: 80.3468% ( 75) 00:11:33.017 12273.105 - 12332.684: 81.0965% ( 83) 00:11:33.017 12332.684 - 12392.262: 81.6835% ( 65) 00:11:33.017 12392.262 - 12451.840: 82.2435% ( 62) 00:11:33.017 12451.840 - 12511.418: 82.7854% ( 60) 00:11:33.017 12511.418 - 12570.996: 83.2822% ( 55) 00:11:33.017 12570.996 - 12630.575: 83.7879% ( 56) 00:11:33.017 12630.575 - 12690.153: 84.3298% ( 60) 00:11:33.017 12690.153 - 12749.731: 84.8176% ( 54) 00:11:33.017 12749.731 - 12809.309: 85.2330% ( 46) 00:11:33.017 12809.309 - 12868.887: 85.6124% ( 42) 00:11:33.017 12868.887 - 12928.465: 86.0098% ( 44) 00:11:33.017 12928.465 - 12988.044: 86.3349% ( 36) 00:11:33.017 12988.044 - 13047.622: 86.6420% ( 34) 00:11:33.017 13047.622 - 13107.200: 86.9762% ( 37) 00:11:33.017 13107.200 - 13166.778: 87.3465% ( 41) 00:11:33.017 13166.778 - 13226.356: 87.7077% ( 40) 00:11:33.017 13226.356 - 13285.935: 88.0871% ( 42) 00:11:33.017 13285.935 - 13345.513: 88.4574% ( 41) 00:11:33.017 13345.513 - 13405.091: 88.8548% ( 44) 00:11:33.017 13405.091 - 13464.669: 89.2341% ( 42) 00:11:33.017 13464.669 - 13524.247: 89.6586% ( 47) 00:11:33.017 13524.247 - 13583.825: 90.1824% ( 58) 00:11:33.017 13583.825 - 13643.404: 90.6521% ( 52) 00:11:33.017 13643.404 - 13702.982: 91.2030% ( 61) 00:11:33.017 13702.982 - 13762.560: 91.6456% ( 49) 00:11:33.017 13762.560 - 13822.138: 92.0430% ( 44) 00:11:33.017 13822.138 - 13881.716: 92.3862% ( 38) 00:11:33.017 13881.716 - 13941.295: 92.7023% ( 35) 00:11:33.017 13941.295 - 14000.873: 93.0636% ( 40) 00:11:33.017 14000.873 - 14060.451: 93.4520% ( 43) 00:11:33.017 14060.451 - 14120.029: 93.8403% ( 43) 00:11:33.017 14120.029 - 14179.607: 94.2197% ( 42) 00:11:33.017 14179.607 - 14239.185: 94.5538% ( 37) 00:11:33.017 14239.185 - 14298.764: 94.8338% ( 31) 00:11:33.017 14298.764 - 14358.342: 95.1499% ( 35) 00:11:33.017 14358.342 - 14417.920: 95.4480% ( 33) 00:11:33.017 14417.920 - 14477.498: 95.7551% ( 34) 00:11:33.017 14477.498 - 14537.076: 96.0260% ( 30) 00:11:33.017 14537.076 - 14596.655: 96.3060% ( 31) 00:11:33.017 14596.655 - 14656.233: 96.5589% ( 28) 00:11:33.017 14656.233 - 14715.811: 96.7847% ( 25) 00:11:33.017 14715.811 - 14775.389: 96.9563% ( 19) 00:11:33.017 14775.389 - 14834.967: 97.0285% ( 8) 00:11:33.017 14834.967 - 14894.545: 97.1008% ( 8) 00:11:33.017 14894.545 - 14954.124: 97.1640% ( 7) 00:11:33.017 14954.124 - 15013.702: 97.2453% ( 9) 00:11:33.017 15013.702 - 15073.280: 97.3176% ( 8) 00:11:33.017 15073.280 - 15132.858: 97.3898% ( 8) 00:11:33.017 15132.858 - 15192.436: 97.4530% ( 7) 00:11:33.017 15192.436 - 15252.015: 97.5072% ( 6) 00:11:33.017 15252.015 - 15371.171: 97.6427% ( 15) 00:11:33.017 15371.171 - 15490.327: 97.7782% ( 15) 00:11:33.017 15490.327 - 15609.484: 97.8775% ( 11) 00:11:33.017 15609.484 - 15728.640: 97.9137% ( 4) 00:11:33.017 15728.640 - 15847.796: 97.9678% ( 6) 00:11:33.017 15847.796 - 15966.953: 98.0491% ( 9) 00:11:33.017 15966.953 - 16086.109: 98.1485% ( 11) 00:11:33.017 16086.109 - 16205.265: 98.2388% ( 10) 00:11:33.017 16205.265 - 16324.422: 98.3382% ( 11) 00:11:33.017 16324.422 - 16443.578: 98.4285% ( 10) 00:11:33.017 16443.578 - 16562.735: 98.5098% ( 9) 00:11:33.017 16562.735 - 16681.891: 98.6001% ( 10) 00:11:33.017 16681.891 - 16801.047: 98.6543% ( 6) 00:11:33.017 16801.047 - 16920.204: 98.6994% ( 5) 
00:11:33.017 16920.204 - 17039.360: 98.7446% ( 5) 00:11:33.017 17039.360 - 17158.516: 98.7807% ( 4) 00:11:33.017 17158.516 - 17277.673: 98.8259% ( 5) 00:11:33.017 17277.673 - 17396.829: 98.8439% ( 2) 00:11:33.017 30027.404 - 30146.560: 98.8530% ( 1) 00:11:33.017 30146.560 - 30265.716: 98.8801% ( 3) 00:11:33.017 30265.716 - 30384.873: 98.9072% ( 3) 00:11:33.017 30384.873 - 30504.029: 98.9342% ( 3) 00:11:33.017 30504.029 - 30742.342: 98.9975% ( 7) 00:11:33.017 30742.342 - 30980.655: 99.0426% ( 5) 00:11:33.017 30980.655 - 31218.967: 99.1059% ( 7) 00:11:33.017 31218.967 - 31457.280: 99.1600% ( 6) 00:11:33.017 31457.280 - 31695.593: 99.2142% ( 6) 00:11:33.017 31695.593 - 31933.905: 99.2684% ( 6) 00:11:33.017 31933.905 - 32172.218: 99.3316% ( 7) 00:11:33.017 32172.218 - 32410.531: 99.3858% ( 6) 00:11:33.017 32410.531 - 32648.844: 99.4220% ( 4) 00:11:33.017 37891.724 - 38130.036: 99.4852% ( 7) 00:11:33.017 38130.036 - 38368.349: 99.5484% ( 7) 00:11:33.017 38368.349 - 38606.662: 99.6026% ( 6) 00:11:33.017 38606.662 - 38844.975: 99.6568% ( 6) 00:11:33.017 38844.975 - 39083.287: 99.7110% ( 6) 00:11:33.017 39083.287 - 39321.600: 99.7652% ( 6) 00:11:33.017 39321.600 - 39559.913: 99.8284% ( 7) 00:11:33.017 39559.913 - 39798.225: 99.8916% ( 7) 00:11:33.017 39798.225 - 40036.538: 99.9458% ( 6) 00:11:33.017 40036.538 - 40274.851: 100.0000% ( 6) 00:11:33.017 00:11:33.017 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:33.017 ============================================================================== 00:11:33.017 Range in us Cumulative IO count 00:11:33.017 8519.680 - 8579.258: 0.0090% ( 1) 00:11:33.017 8638.836 - 8698.415: 0.0723% ( 7) 00:11:33.017 8698.415 - 8757.993: 0.2529% ( 20) 00:11:33.017 8757.993 - 8817.571: 0.5148% ( 29) 00:11:33.017 8817.571 - 8877.149: 0.8129% ( 33) 00:11:33.017 8877.149 - 8936.727: 1.1922% ( 42) 00:11:33.017 8936.727 - 8996.305: 1.5625% ( 41) 00:11:33.017 8996.305 - 9055.884: 2.0863% ( 58) 00:11:33.017 9055.884 - 9115.462: 2.6192% ( 59) 00:11:33.017 9115.462 - 9175.040: 3.1702% ( 61) 00:11:33.017 9175.040 - 9234.618: 3.7211% ( 61) 00:11:33.017 9234.618 - 9294.196: 4.3082% ( 65) 00:11:33.017 9294.196 - 9353.775: 4.7688% ( 51) 00:11:33.017 9353.775 - 9413.353: 5.2565% ( 54) 00:11:33.017 9413.353 - 9472.931: 5.8978% ( 71) 00:11:33.017 9472.931 - 9532.509: 6.4939% ( 66) 00:11:33.017 9532.509 - 9592.087: 7.3609% ( 96) 00:11:33.017 9592.087 - 9651.665: 8.4357% ( 119) 00:11:33.017 9651.665 - 9711.244: 9.5827% ( 127) 00:11:33.017 9711.244 - 9770.822: 10.8111% ( 136) 00:11:33.017 9770.822 - 9830.400: 12.1207% ( 145) 00:11:33.017 9830.400 - 9889.978: 13.1864% ( 118) 00:11:33.017 9889.978 - 9949.556: 14.2522% ( 118) 00:11:33.017 9949.556 - 10009.135: 15.4082% ( 128) 00:11:33.017 10009.135 - 10068.713: 16.6456% ( 137) 00:11:33.017 10068.713 - 10128.291: 17.7475% ( 122) 00:11:33.017 10128.291 - 10187.869: 18.9035% ( 128) 00:11:33.017 10187.869 - 10247.447: 20.2493% ( 149) 00:11:33.017 10247.447 - 10307.025: 21.7937% ( 171) 00:11:33.017 10307.025 - 10366.604: 23.4827% ( 187) 00:11:33.017 10366.604 - 10426.182: 25.3703% ( 209) 00:11:33.017 10426.182 - 10485.760: 27.1947% ( 202) 00:11:33.017 10485.760 - 10545.338: 29.0643% ( 207) 00:11:33.017 10545.338 - 10604.916: 30.9429% ( 208) 00:11:33.017 10604.916 - 10664.495: 33.1286% ( 242) 00:11:33.017 10664.495 - 10724.073: 35.3775% ( 249) 00:11:33.017 10724.073 - 10783.651: 37.8613% ( 275) 00:11:33.017 10783.651 - 10843.229: 40.2637% ( 266) 00:11:33.017 10843.229 - 10902.807: 42.6842% ( 268) 00:11:33.017 10902.807 - 10962.385: 
45.2673% ( 286) 00:11:33.017 10962.385 - 11021.964: 47.8595% ( 287) 00:11:33.017 11021.964 - 11081.542: 50.2168% ( 261) 00:11:33.017 11081.542 - 11141.120: 52.3392% ( 235) 00:11:33.017 11141.120 - 11200.698: 54.5430% ( 244) 00:11:33.017 11200.698 - 11260.276: 56.7106% ( 240) 00:11:33.017 11260.276 - 11319.855: 58.6976% ( 220) 00:11:33.017 11319.855 - 11379.433: 60.6124% ( 212) 00:11:33.017 11379.433 - 11439.011: 62.5181% ( 211) 00:11:33.017 11439.011 - 11498.589: 64.2341% ( 190) 00:11:33.017 11498.589 - 11558.167: 65.9772% ( 193) 00:11:33.017 11558.167 - 11617.745: 67.6572% ( 186) 00:11:33.017 11617.745 - 11677.324: 69.4093% ( 194) 00:11:33.017 11677.324 - 11736.902: 70.9447% ( 170) 00:11:33.017 11736.902 - 11796.480: 72.4621% ( 168) 00:11:33.017 11796.480 - 11856.058: 73.9433% ( 164) 00:11:33.017 11856.058 - 11915.636: 75.1174% ( 130) 00:11:33.017 11915.636 - 11975.215: 76.2554% ( 126) 00:11:33.017 11975.215 - 12034.793: 77.1947% ( 104) 00:11:33.017 12034.793 - 12094.371: 78.0257% ( 92) 00:11:33.017 12094.371 - 12153.949: 78.8114% ( 87) 00:11:33.017 12153.949 - 12213.527: 79.6062% ( 88) 00:11:33.017 12213.527 - 12273.105: 80.3107% ( 78) 00:11:33.017 12273.105 - 12332.684: 80.9429% ( 70) 00:11:33.017 12332.684 - 12392.262: 81.6022% ( 73) 00:11:33.017 12392.262 - 12451.840: 82.2525% ( 72) 00:11:33.017 12451.840 - 12511.418: 82.9570% ( 78) 00:11:33.017 12511.418 - 12570.996: 83.5621% ( 67) 00:11:33.017 12570.996 - 12630.575: 84.1402% ( 64) 00:11:33.017 12630.575 - 12690.153: 84.6279% ( 54) 00:11:33.017 12690.153 - 12749.731: 85.0885% ( 51) 00:11:33.017 12749.731 - 12809.309: 85.4408% ( 39) 00:11:33.017 12809.309 - 12868.887: 85.7749% ( 37) 00:11:33.017 12868.887 - 12928.465: 86.0820% ( 34) 00:11:33.017 12928.465 - 12988.044: 86.4072% ( 36) 00:11:33.017 12988.044 - 13047.622: 86.7504% ( 38) 00:11:33.017 13047.622 - 13107.200: 87.0936% ( 38) 00:11:33.017 13107.200 - 13166.778: 87.4187% ( 36) 00:11:33.017 13166.778 - 13226.356: 87.7619% ( 38) 00:11:33.017 13226.356 - 13285.935: 88.1322% ( 41) 00:11:33.017 13285.935 - 13345.513: 88.4754% ( 38) 00:11:33.017 13345.513 - 13405.091: 88.7464% ( 30) 00:11:33.017 13405.091 - 13464.669: 89.0444% ( 33) 00:11:33.017 13464.669 - 13524.247: 89.3335% ( 32) 00:11:33.017 13524.247 - 13583.825: 89.7941% ( 51) 00:11:33.017 13583.825 - 13643.404: 90.1824% ( 43) 00:11:33.017 13643.404 - 13702.982: 90.5979% ( 46) 00:11:33.017 13702.982 - 13762.560: 91.0224% ( 47) 00:11:33.017 13762.560 - 13822.138: 91.4017% ( 42) 00:11:33.017 13822.138 - 13881.716: 91.7540% ( 39) 00:11:33.017 13881.716 - 13941.295: 92.1152% ( 40) 00:11:33.017 13941.295 - 14000.873: 92.4585% ( 38) 00:11:33.017 14000.873 - 14060.451: 92.7926% ( 37) 00:11:33.017 14060.451 - 14120.029: 93.1900% ( 44) 00:11:33.017 14120.029 - 14179.607: 93.5603% ( 41) 00:11:33.017 14179.607 - 14239.185: 93.9216% ( 40) 00:11:33.017 14239.185 - 14298.764: 94.2377% ( 35) 00:11:33.017 14298.764 - 14358.342: 94.5087% ( 30) 00:11:33.017 14358.342 - 14417.920: 94.8970% ( 43) 00:11:33.017 14417.920 - 14477.498: 95.2312% ( 37) 00:11:33.017 14477.498 - 14537.076: 95.5835% ( 39) 00:11:33.017 14537.076 - 14596.655: 95.9538% ( 41) 00:11:33.017 14596.655 - 14656.233: 96.1976% ( 27) 00:11:33.017 14656.233 - 14715.811: 96.4505% ( 28) 00:11:33.017 14715.811 - 14775.389: 96.6582% ( 23) 00:11:33.018 14775.389 - 14834.967: 96.8298% ( 19) 00:11:33.018 14834.967 - 14894.545: 96.9563% ( 14) 00:11:33.018 14894.545 - 14954.124: 97.0556% ( 11) 00:11:33.018 14954.124 - 15013.702: 97.1279% ( 8) 00:11:33.018 15013.702 - 15073.280: 97.1821% ( 6) 
00:11:33.018 15073.280 - 15132.858: 97.2453% ( 7) 00:11:33.018 15132.858 - 15192.436: 97.3085% ( 7) 00:11:33.018 15192.436 - 15252.015: 97.3717% ( 7) 00:11:33.018 15252.015 - 15371.171: 97.5072% ( 15) 00:11:33.018 15371.171 - 15490.327: 97.6337% ( 14) 00:11:33.018 15490.327 - 15609.484: 97.7962% ( 18) 00:11:33.018 15609.484 - 15728.640: 97.8685% ( 8) 00:11:33.018 15728.640 - 15847.796: 97.9408% ( 8) 00:11:33.018 15847.796 - 15966.953: 98.0220% ( 9) 00:11:33.018 15966.953 - 16086.109: 98.1485% ( 14) 00:11:33.018 16086.109 - 16205.265: 98.3020% ( 17) 00:11:33.018 16205.265 - 16324.422: 98.3743% ( 8) 00:11:33.018 16324.422 - 16443.578: 98.4465% ( 8) 00:11:33.018 16443.578 - 16562.735: 98.5098% ( 7) 00:11:33.018 16562.735 - 16681.891: 98.6001% ( 10) 00:11:33.018 16681.891 - 16801.047: 98.6452% ( 5) 00:11:33.018 16801.047 - 16920.204: 98.6814% ( 4) 00:11:33.018 16920.204 - 17039.360: 98.7265% ( 5) 00:11:33.018 17039.360 - 17158.516: 98.7626% ( 4) 00:11:33.018 17158.516 - 17277.673: 98.8078% ( 5) 00:11:33.018 17277.673 - 17396.829: 98.8439% ( 4) 00:11:33.018 27644.276 - 27763.433: 98.8530% ( 1) 00:11:33.018 27763.433 - 27882.589: 98.8801% ( 3) 00:11:33.018 27882.589 - 28001.745: 98.9072% ( 3) 00:11:33.018 28001.745 - 28120.902: 98.9342% ( 3) 00:11:33.018 28120.902 - 28240.058: 98.9613% ( 3) 00:11:33.018 28240.058 - 28359.215: 98.9884% ( 3) 00:11:33.018 28359.215 - 28478.371: 99.0155% ( 3) 00:11:33.018 28478.371 - 28597.527: 99.0426% ( 3) 00:11:33.018 28597.527 - 28716.684: 99.0788% ( 4) 00:11:33.018 28716.684 - 28835.840: 99.1059% ( 3) 00:11:33.018 28835.840 - 28954.996: 99.1329% ( 3) 00:11:33.018 28954.996 - 29074.153: 99.1510% ( 2) 00:11:33.018 29074.153 - 29193.309: 99.1781% ( 3) 00:11:33.018 29193.309 - 29312.465: 99.2052% ( 3) 00:11:33.018 29312.465 - 29431.622: 99.2323% ( 3) 00:11:33.018 29431.622 - 29550.778: 99.2684% ( 4) 00:11:33.018 29550.778 - 29669.935: 99.2955% ( 3) 00:11:33.018 29669.935 - 29789.091: 99.3226% ( 3) 00:11:33.018 29789.091 - 29908.247: 99.3407% ( 2) 00:11:33.018 29908.247 - 30027.404: 99.3768% ( 4) 00:11:33.018 30027.404 - 30146.560: 99.4039% ( 3) 00:11:33.018 30146.560 - 30265.716: 99.4220% ( 2) 00:11:33.018 35270.284 - 35508.596: 99.4491% ( 3) 00:11:33.018 35508.596 - 35746.909: 99.5123% ( 7) 00:11:33.018 35746.909 - 35985.222: 99.5755% ( 7) 00:11:33.018 35985.222 - 36223.535: 99.6387% ( 7) 00:11:33.018 36223.535 - 36461.847: 99.6929% ( 6) 00:11:33.018 36461.847 - 36700.160: 99.7561% ( 7) 00:11:33.018 36700.160 - 36938.473: 99.8194% ( 7) 00:11:33.018 36938.473 - 37176.785: 99.8736% ( 6) 00:11:33.018 37176.785 - 37415.098: 99.9277% ( 6) 00:11:33.018 37415.098 - 37653.411: 99.9910% ( 7) 00:11:33.018 37653.411 - 37891.724: 100.0000% ( 1) 00:11:33.018 00:11:33.018 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:33.018 ============================================================================== 00:11:33.018 Range in us Cumulative IO count 00:11:33.018 8579.258 - 8638.836: 0.0359% ( 4) 00:11:33.018 8638.836 - 8698.415: 0.1078% ( 8) 00:11:33.018 8698.415 - 8757.993: 0.2784% ( 19) 00:11:33.018 8757.993 - 8817.571: 0.5568% ( 31) 00:11:33.018 8817.571 - 8877.149: 0.9429% ( 43) 00:11:33.018 8877.149 - 8936.727: 1.3649% ( 47) 00:11:33.018 8936.727 - 8996.305: 1.7960% ( 48) 00:11:33.018 8996.305 - 9055.884: 2.2540% ( 51) 00:11:33.018 9055.884 - 9115.462: 2.7119% ( 51) 00:11:33.018 9115.462 - 9175.040: 3.1519% ( 49) 00:11:33.018 9175.040 - 9234.618: 3.5740% ( 47) 00:11:33.018 9234.618 - 9294.196: 4.0050% ( 48) 00:11:33.018 9294.196 - 9353.775: 4.4540% ( 50) 
00:11:33.018 9353.775 - 9413.353: 4.9479% ( 55) 00:11:33.018 9413.353 - 9472.931: 5.5406% ( 66) 00:11:33.018 9472.931 - 9532.509: 6.2051% ( 74) 00:11:33.018 9532.509 - 9592.087: 7.1570% ( 106) 00:11:33.018 9592.087 - 9651.665: 8.2525% ( 122) 00:11:33.018 9651.665 - 9711.244: 9.5097% ( 140) 00:11:33.018 9711.244 - 9770.822: 10.7759% ( 141) 00:11:33.018 9770.822 - 9830.400: 11.8355% ( 118) 00:11:33.018 9830.400 - 9889.978: 12.8412% ( 112) 00:11:33.018 9889.978 - 9949.556: 13.9098% ( 119) 00:11:33.018 9949.556 - 10009.135: 15.1221% ( 135) 00:11:33.018 10009.135 - 10068.713: 16.4511% ( 148) 00:11:33.018 10068.713 - 10128.291: 17.7173% ( 141) 00:11:33.018 10128.291 - 10187.869: 18.9745% ( 140) 00:11:33.018 10187.869 - 10247.447: 20.2945% ( 147) 00:11:33.018 10247.447 - 10307.025: 21.7313% ( 160) 00:11:33.018 10307.025 - 10366.604: 23.4375% ( 190) 00:11:33.018 10366.604 - 10426.182: 25.2245% ( 199) 00:11:33.018 10426.182 - 10485.760: 27.0295% ( 201) 00:11:33.018 10485.760 - 10545.338: 28.9152% ( 210) 00:11:33.018 10545.338 - 10604.916: 30.8728% ( 218) 00:11:33.018 10604.916 - 10664.495: 32.8843% ( 224) 00:11:33.018 10664.495 - 10724.073: 35.0126% ( 237) 00:11:33.018 10724.073 - 10783.651: 37.4461% ( 271) 00:11:33.018 10783.651 - 10843.229: 39.8438% ( 267) 00:11:33.018 10843.229 - 10902.807: 42.5018% ( 296) 00:11:33.018 10902.807 - 10962.385: 44.8276% ( 259) 00:11:33.018 10962.385 - 11021.964: 47.2971% ( 275) 00:11:33.018 11021.964 - 11081.542: 49.5061% ( 246) 00:11:33.018 11081.542 - 11141.120: 51.6882% ( 243) 00:11:33.018 11141.120 - 11200.698: 53.7895% ( 234) 00:11:33.018 11200.698 - 11260.276: 55.6843% ( 211) 00:11:33.018 11260.276 - 11319.855: 57.5880% ( 212) 00:11:33.018 11319.855 - 11379.433: 59.7522% ( 241) 00:11:33.018 11379.433 - 11439.011: 61.7636% ( 224) 00:11:33.018 11439.011 - 11498.589: 63.6764% ( 213) 00:11:33.018 11498.589 - 11558.167: 65.6789% ( 223) 00:11:33.018 11558.167 - 11617.745: 67.5557% ( 209) 00:11:33.018 11617.745 - 11677.324: 69.3696% ( 202) 00:11:33.018 11677.324 - 11736.902: 70.9411% ( 175) 00:11:33.018 11736.902 - 11796.480: 72.5036% ( 174) 00:11:33.018 11796.480 - 11856.058: 74.1559% ( 184) 00:11:33.018 11856.058 - 11915.636: 75.5388% ( 154) 00:11:33.018 11915.636 - 11975.215: 76.7601% ( 136) 00:11:33.018 11975.215 - 12034.793: 77.7029% ( 105) 00:11:33.018 12034.793 - 12094.371: 78.4842% ( 87) 00:11:33.018 12094.371 - 12153.949: 79.2026% ( 80) 00:11:33.018 12153.949 - 12213.527: 79.8042% ( 67) 00:11:33.018 12213.527 - 12273.105: 80.3700% ( 63) 00:11:33.018 12273.105 - 12332.684: 80.8998% ( 59) 00:11:33.018 12332.684 - 12392.262: 81.4296% ( 59) 00:11:33.018 12392.262 - 12451.840: 81.9774% ( 61) 00:11:33.018 12451.840 - 12511.418: 82.4802% ( 56) 00:11:33.018 12511.418 - 12570.996: 82.9741% ( 55) 00:11:33.018 12570.996 - 12630.575: 83.5129% ( 60) 00:11:33.018 12630.575 - 12690.153: 84.0338% ( 58) 00:11:33.018 12690.153 - 12749.731: 84.5277% ( 55) 00:11:33.018 12749.731 - 12809.309: 84.9407% ( 46) 00:11:33.018 12809.309 - 12868.887: 85.3987% ( 51) 00:11:33.018 12868.887 - 12928.465: 85.8477% ( 50) 00:11:33.018 12928.465 - 12988.044: 86.2069% ( 40) 00:11:33.018 12988.044 - 13047.622: 86.5571% ( 39) 00:11:33.018 13047.622 - 13107.200: 86.8894% ( 37) 00:11:33.018 13107.200 - 13166.778: 87.2216% ( 37) 00:11:33.018 13166.778 - 13226.356: 87.6078% ( 43) 00:11:33.018 13226.356 - 13285.935: 87.9849% ( 42) 00:11:33.018 13285.935 - 13345.513: 88.3351% ( 39) 00:11:33.018 13345.513 - 13405.091: 88.7123% ( 42) 00:11:33.018 13405.091 - 13464.669: 89.2331% ( 58) 00:11:33.018 
13464.669 - 13524.247: 89.7180% ( 54) 00:11:33.018 13524.247 - 13583.825: 90.0862% ( 41) 00:11:33.018 13583.825 - 13643.404: 90.5083% ( 47) 00:11:33.018 13643.404 - 13702.982: 90.9662% ( 51) 00:11:33.018 13702.982 - 13762.560: 91.3524% ( 43) 00:11:33.018 13762.560 - 13822.138: 91.8014% ( 50) 00:11:33.018 13822.138 - 13881.716: 92.1157% ( 35) 00:11:33.018 13881.716 - 13941.295: 92.4838% ( 41) 00:11:33.018 13941.295 - 14000.873: 92.7981% ( 35) 00:11:33.018 14000.873 - 14060.451: 93.1214% ( 36) 00:11:33.018 14060.451 - 14120.029: 93.4896% ( 41) 00:11:33.018 14120.029 - 14179.607: 93.8129% ( 36) 00:11:33.018 14179.607 - 14239.185: 94.1631% ( 39) 00:11:33.018 14239.185 - 14298.764: 94.4504% ( 32) 00:11:33.018 14298.764 - 14358.342: 94.7737% ( 36) 00:11:33.018 14358.342 - 14417.920: 95.1060% ( 37) 00:11:33.018 14417.920 - 14477.498: 95.4203% ( 35) 00:11:33.018 14477.498 - 14537.076: 95.7435% ( 36) 00:11:33.018 14537.076 - 14596.655: 96.0489% ( 34) 00:11:33.018 14596.655 - 14656.233: 96.2823% ( 26) 00:11:33.018 14656.233 - 14715.811: 96.4889% ( 23) 00:11:33.018 14715.811 - 14775.389: 96.6325% ( 16) 00:11:33.018 14775.389 - 14834.967: 96.7493% ( 13) 00:11:33.018 14834.967 - 14894.545: 96.8570% ( 12) 00:11:33.018 14894.545 - 14954.124: 96.9468% ( 10) 00:11:33.018 14954.124 - 15013.702: 97.0456% ( 11) 00:11:33.018 15013.702 - 15073.280: 97.1354% ( 10) 00:11:33.018 15073.280 - 15132.858: 97.2073% ( 8) 00:11:33.018 15132.858 - 15192.436: 97.2791% ( 8) 00:11:33.018 15192.436 - 15252.015: 97.3599% ( 9) 00:11:33.018 15252.015 - 15371.171: 97.5126% ( 17) 00:11:33.018 15371.171 - 15490.327: 97.6114% ( 11) 00:11:33.018 15490.327 - 15609.484: 97.7011% ( 10) 00:11:33.018 15609.484 - 15728.640: 97.7999% ( 11) 00:11:33.018 15728.640 - 15847.796: 97.8718% ( 8) 00:11:33.018 15847.796 - 15966.953: 98.0244% ( 17) 00:11:33.018 15966.953 - 16086.109: 98.2130% ( 21) 00:11:33.018 16086.109 - 16205.265: 98.3297% ( 13) 00:11:33.018 16205.265 - 16324.422: 98.4016% ( 8) 00:11:33.018 16324.422 - 16443.578: 98.4734% ( 8) 00:11:33.018 16443.578 - 16562.735: 98.5542% ( 9) 00:11:33.018 16562.735 - 16681.891: 98.6351% ( 9) 00:11:33.018 16681.891 - 16801.047: 98.6889% ( 6) 00:11:33.018 16801.047 - 16920.204: 98.7249% ( 4) 00:11:33.019 16920.204 - 17039.360: 98.7698% ( 5) 00:11:33.019 17039.360 - 17158.516: 98.8147% ( 5) 00:11:33.019 17158.516 - 17277.673: 98.8506% ( 4) 00:11:33.019 19065.018 - 19184.175: 98.8596% ( 1) 00:11:33.019 19184.175 - 19303.331: 98.8775% ( 2) 00:11:33.019 19303.331 - 19422.487: 98.9045% ( 3) 00:11:33.019 19422.487 - 19541.644: 98.9314% ( 3) 00:11:33.019 19541.644 - 19660.800: 98.9583% ( 3) 00:11:33.019 19660.800 - 19779.956: 98.9943% ( 4) 00:11:33.019 19779.956 - 19899.113: 99.0212% ( 3) 00:11:33.019 19899.113 - 20018.269: 99.0481% ( 3) 00:11:33.019 20018.269 - 20137.425: 99.0841% ( 4) 00:11:33.019 20137.425 - 20256.582: 99.1110% ( 3) 00:11:33.019 20256.582 - 20375.738: 99.1379% ( 3) 00:11:33.019 20375.738 - 20494.895: 99.1739% ( 4) 00:11:33.019 20494.895 - 20614.051: 99.2008% ( 3) 00:11:33.019 20614.051 - 20733.207: 99.2277% ( 3) 00:11:33.019 20733.207 - 20852.364: 99.2547% ( 3) 00:11:33.019 20852.364 - 20971.520: 99.2816% ( 3) 00:11:33.019 20971.520 - 21090.676: 99.3085% ( 3) 00:11:33.019 21090.676 - 21209.833: 99.3355% ( 3) 00:11:33.019 21209.833 - 21328.989: 99.3624% ( 3) 00:11:33.019 21328.989 - 21448.145: 99.3983% ( 4) 00:11:33.019 21448.145 - 21567.302: 99.4253% ( 3) 00:11:33.019 26929.338 - 27048.495: 99.4522% ( 3) 00:11:33.019 27048.495 - 27167.651: 99.4792% ( 3) 00:11:33.019 27167.651 - 
27286.807: 99.5061% ( 3) 00:11:33.019 27286.807 - 27405.964: 99.5330% ( 3) 00:11:33.019 27405.964 - 27525.120: 99.5690% ( 4) 00:11:33.019 27525.120 - 27644.276: 99.5869% ( 2) 00:11:33.019 27644.276 - 27763.433: 99.6139% ( 3) 00:11:33.019 27763.433 - 27882.589: 99.6408% ( 3) 00:11:33.019 27882.589 - 28001.745: 99.6767% ( 4) 00:11:33.019 28001.745 - 28120.902: 99.7037% ( 3) 00:11:33.019 28120.902 - 28240.058: 99.7306% ( 3) 00:11:33.019 28240.058 - 28359.215: 99.7575% ( 3) 00:11:33.019 28359.215 - 28478.371: 99.7845% ( 3) 00:11:33.019 28478.371 - 28597.527: 99.8114% ( 3) 00:11:33.019 28597.527 - 28716.684: 99.8384% ( 3) 00:11:33.019 28716.684 - 28835.840: 99.8653% ( 3) 00:11:33.019 28835.840 - 28954.996: 99.8922% ( 3) 00:11:33.019 28954.996 - 29074.153: 99.9192% ( 3) 00:11:33.019 29074.153 - 29193.309: 99.9461% ( 3) 00:11:33.019 29193.309 - 29312.465: 99.9731% ( 3) 00:11:33.019 29312.465 - 29431.622: 100.0000% ( 3) 00:11:33.019 00:11:33.019 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:33.019 ============================================================================== 00:11:33.019 Range in us Cumulative IO count 00:11:33.019 8519.680 - 8579.258: 0.0090% ( 1) 00:11:33.019 8579.258 - 8638.836: 0.0269% ( 2) 00:11:33.019 8638.836 - 8698.415: 0.1078% ( 9) 00:11:33.019 8698.415 - 8757.993: 0.3323% ( 25) 00:11:33.019 8757.993 - 8817.571: 0.5837% ( 28) 00:11:33.019 8817.571 - 8877.149: 0.9608% ( 42) 00:11:33.019 8877.149 - 8936.727: 1.3919% ( 48) 00:11:33.019 8936.727 - 8996.305: 1.8319% ( 49) 00:11:33.019 8996.305 - 9055.884: 2.2450% ( 46) 00:11:33.019 9055.884 - 9115.462: 2.6221% ( 42) 00:11:33.019 9115.462 - 9175.040: 3.1968% ( 64) 00:11:33.019 9175.040 - 9234.618: 3.6458% ( 50) 00:11:33.019 9234.618 - 9294.196: 4.1577% ( 57) 00:11:33.019 9294.196 - 9353.775: 4.6157% ( 51) 00:11:33.019 9353.775 - 9413.353: 5.0198% ( 45) 00:11:33.019 9413.353 - 9472.931: 5.4777% ( 51) 00:11:33.019 9472.931 - 9532.509: 6.0524% ( 64) 00:11:33.019 9532.509 - 9592.087: 6.6900% ( 71) 00:11:33.019 9592.087 - 9651.665: 7.7407% ( 117) 00:11:33.019 9651.665 - 9711.244: 9.0248% ( 143) 00:11:33.019 9711.244 - 9770.822: 10.4346% ( 157) 00:11:33.019 9770.822 - 9830.400: 11.4943% ( 118) 00:11:33.019 9830.400 - 9889.978: 12.7604% ( 141) 00:11:33.019 9889.978 - 9949.556: 14.0625% ( 145) 00:11:33.019 9949.556 - 10009.135: 15.3556% ( 144) 00:11:33.019 10009.135 - 10068.713: 16.6936% ( 149) 00:11:33.019 10068.713 - 10128.291: 17.9328% ( 138) 00:11:33.019 10128.291 - 10187.869: 19.1721% ( 138) 00:11:33.019 10187.869 - 10247.447: 20.4562% ( 143) 00:11:33.019 10247.447 - 10307.025: 22.0277% ( 175) 00:11:33.019 10307.025 - 10366.604: 23.7338% ( 190) 00:11:33.019 10366.604 - 10426.182: 25.4580% ( 192) 00:11:33.019 10426.182 - 10485.760: 27.2989% ( 205) 00:11:33.019 10485.760 - 10545.338: 29.1756% ( 209) 00:11:33.019 10545.338 - 10604.916: 31.0884% ( 213) 00:11:33.019 10604.916 - 10664.495: 32.9382% ( 206) 00:11:33.019 10664.495 - 10724.073: 34.8240% ( 210) 00:11:33.019 10724.073 - 10783.651: 37.1318% ( 257) 00:11:33.019 10783.651 - 10843.229: 39.5384% ( 268) 00:11:33.019 10843.229 - 10902.807: 42.0438% ( 279) 00:11:33.019 10902.807 - 10962.385: 44.4774% ( 271) 00:11:33.019 10962.385 - 11021.964: 46.9109% ( 271) 00:11:33.019 11021.964 - 11081.542: 49.2457% ( 260) 00:11:33.019 11081.542 - 11141.120: 51.6792% ( 271) 00:11:33.019 11141.120 - 11200.698: 53.8524% ( 242) 00:11:33.019 11200.698 - 11260.276: 55.9986% ( 239) 00:11:33.019 11260.276 - 11319.855: 58.0370% ( 227) 00:11:33.019 11319.855 - 11379.433: 59.9767% ( 
216) 00:11:33.019 11379.433 - 11439.011: 61.9792% ( 223) 00:11:33.019 11439.011 - 11498.589: 63.8200% ( 205) 00:11:33.019 11498.589 - 11558.167: 65.5801% ( 196) 00:11:33.019 11558.167 - 11617.745: 67.4389% ( 207) 00:11:33.019 11617.745 - 11677.324: 69.1990% ( 196) 00:11:33.019 11677.324 - 11736.902: 71.0309% ( 204) 00:11:33.019 11736.902 - 11796.480: 72.6293% ( 178) 00:11:33.019 11796.480 - 11856.058: 73.9224% ( 144) 00:11:33.019 11856.058 - 11915.636: 75.0988% ( 131) 00:11:33.019 11915.636 - 11975.215: 76.4996% ( 156) 00:11:33.019 11975.215 - 12034.793: 77.6401% ( 127) 00:11:33.019 12034.793 - 12094.371: 78.4752% ( 93) 00:11:33.019 12094.371 - 12153.949: 79.2744% ( 89) 00:11:33.019 12153.949 - 12213.527: 80.0198% ( 83) 00:11:33.019 12213.527 - 12273.105: 80.6394% ( 69) 00:11:33.019 12273.105 - 12332.684: 81.1871% ( 61) 00:11:33.019 12332.684 - 12392.262: 81.6631% ( 53) 00:11:33.019 12392.262 - 12451.840: 82.1929% ( 59) 00:11:33.019 12451.840 - 12511.418: 82.7945% ( 67) 00:11:33.019 12511.418 - 12570.996: 83.3333% ( 60) 00:11:33.019 12570.996 - 12630.575: 83.7913% ( 51) 00:11:33.019 12630.575 - 12690.153: 84.2852% ( 55) 00:11:33.019 12690.153 - 12749.731: 84.7522% ( 52) 00:11:33.019 12749.731 - 12809.309: 85.1832% ( 48) 00:11:33.019 12809.309 - 12868.887: 85.5065% ( 36) 00:11:33.019 12868.887 - 12928.465: 85.8836% ( 42) 00:11:33.019 12928.465 - 12988.044: 86.2877% ( 45) 00:11:33.019 12988.044 - 13047.622: 86.6379% ( 39) 00:11:33.019 13047.622 - 13107.200: 86.9432% ( 34) 00:11:33.019 13107.200 - 13166.778: 87.3384% ( 44) 00:11:33.019 13166.778 - 13226.356: 87.6437% ( 34) 00:11:33.019 13226.356 - 13285.935: 87.9310% ( 32) 00:11:33.019 13285.935 - 13345.513: 88.2184% ( 32) 00:11:33.019 13345.513 - 13405.091: 88.5686% ( 39) 00:11:33.019 13405.091 - 13464.669: 88.9637% ( 44) 00:11:33.019 13464.669 - 13524.247: 89.4037% ( 49) 00:11:33.019 13524.247 - 13583.825: 89.7809% ( 42) 00:11:33.019 13583.825 - 13643.404: 90.1850% ( 45) 00:11:33.019 13643.404 - 13702.982: 90.7597% ( 64) 00:11:33.019 13702.982 - 13762.560: 91.1638% ( 45) 00:11:33.019 13762.560 - 13822.138: 91.5140% ( 39) 00:11:33.019 13822.138 - 13881.716: 91.8642% ( 39) 00:11:33.019 13881.716 - 13941.295: 92.2055% ( 38) 00:11:33.019 13941.295 - 14000.873: 92.5647% ( 40) 00:11:33.019 14000.873 - 14060.451: 92.9149% ( 39) 00:11:33.019 14060.451 - 14120.029: 93.2471% ( 37) 00:11:33.019 14120.029 - 14179.607: 93.5884% ( 38) 00:11:33.019 14179.607 - 14239.185: 93.9565% ( 41) 00:11:33.019 14239.185 - 14298.764: 94.2888% ( 37) 00:11:33.019 14298.764 - 14358.342: 94.6480% ( 40) 00:11:33.019 14358.342 - 14417.920: 94.9802% ( 37) 00:11:33.019 14417.920 - 14477.498: 95.2496% ( 30) 00:11:33.019 14477.498 - 14537.076: 95.5639% ( 35) 00:11:33.019 14537.076 - 14596.655: 95.8872% ( 36) 00:11:33.019 14596.655 - 14656.233: 96.1656% ( 31) 00:11:33.019 14656.233 - 14715.811: 96.4260% ( 29) 00:11:33.019 14715.811 - 14775.389: 96.6505% ( 25) 00:11:33.019 14775.389 - 14834.967: 96.8121% ( 18) 00:11:33.019 14834.967 - 14894.545: 96.9828% ( 19) 00:11:33.019 14894.545 - 14954.124: 97.1444% ( 18) 00:11:33.019 14954.124 - 15013.702: 97.2252% ( 9) 00:11:33.019 15013.702 - 15073.280: 97.3150% ( 10) 00:11:33.019 15073.280 - 15132.858: 97.3869% ( 8) 00:11:33.019 15132.858 - 15192.436: 97.4407% ( 6) 00:11:33.019 15192.436 - 15252.015: 97.4856% ( 5) 00:11:33.019 15252.015 - 15371.171: 97.5754% ( 10) 00:11:33.019 15371.171 - 15490.327: 97.6203% ( 5) 00:11:33.019 15490.327 - 15609.484: 97.7101% ( 10) 00:11:33.019 15609.484 - 15728.640: 97.8179% ( 12) 00:11:33.019 15728.640 - 
15847.796: 97.8897% ( 8) 00:11:33.019 15847.796 - 15966.953: 97.9526% ( 7) 00:11:33.020 15966.953 - 16086.109: 98.0514% ( 11) 00:11:33.020 16086.109 - 16205.265: 98.1681% ( 13) 00:11:33.020 16205.265 - 16324.422: 98.2759% ( 12) 00:11:33.020 16324.422 - 16443.578: 98.3746% ( 11) 00:11:33.020 16443.578 - 16562.735: 98.4644% ( 10) 00:11:33.020 16562.735 - 16681.891: 98.5453% ( 9) 00:11:33.020 16681.891 - 16801.047: 98.6710% ( 14) 00:11:33.020 16801.047 - 16920.204: 98.7787% ( 12) 00:11:33.020 16920.204 - 17039.360: 98.8416% ( 7) 00:11:33.020 17039.360 - 17158.516: 98.8955% ( 6) 00:11:33.020 17158.516 - 17277.673: 98.9673% ( 8) 00:11:33.020 17277.673 - 17396.829: 99.0212% ( 6) 00:11:33.020 17396.829 - 17515.985: 99.0571% ( 4) 00:11:33.020 17515.985 - 17635.142: 99.0841% ( 3) 00:11:33.020 17635.142 - 17754.298: 99.1110% ( 3) 00:11:33.020 17754.298 - 17873.455: 99.1379% ( 3) 00:11:33.020 17873.455 - 17992.611: 99.1649% ( 3) 00:11:33.020 17992.611 - 18111.767: 99.1918% ( 3) 00:11:33.020 18111.767 - 18230.924: 99.2188% ( 3) 00:11:33.020 18230.924 - 18350.080: 99.2547% ( 4) 00:11:33.020 18350.080 - 18469.236: 99.2816% ( 3) 00:11:33.020 18469.236 - 18588.393: 99.3085% ( 3) 00:11:33.020 18588.393 - 18707.549: 99.3355% ( 3) 00:11:33.020 18707.549 - 18826.705: 99.3624% ( 3) 00:11:33.020 18826.705 - 18945.862: 99.3894% ( 3) 00:11:33.020 18945.862 - 19065.018: 99.4163% ( 3) 00:11:33.020 19065.018 - 19184.175: 99.4253% ( 1) 00:11:33.020 24427.055 - 24546.211: 99.4343% ( 1) 00:11:33.020 24546.211 - 24665.367: 99.4612% ( 3) 00:11:33.020 24665.367 - 24784.524: 99.4881% ( 3) 00:11:33.020 24784.524 - 24903.680: 99.5151% ( 3) 00:11:33.020 24903.680 - 25022.836: 99.5420% ( 3) 00:11:33.020 25022.836 - 25141.993: 99.5690% ( 3) 00:11:33.020 25141.993 - 25261.149: 99.5959% ( 3) 00:11:33.020 25261.149 - 25380.305: 99.6228% ( 3) 00:11:33.020 25380.305 - 25499.462: 99.6498% ( 3) 00:11:33.020 25499.462 - 25618.618: 99.6767% ( 3) 00:11:33.020 25618.618 - 25737.775: 99.7037% ( 3) 00:11:33.020 25737.775 - 25856.931: 99.7306% ( 3) 00:11:33.020 25856.931 - 25976.087: 99.7575% ( 3) 00:11:33.020 25976.087 - 26095.244: 99.7935% ( 4) 00:11:33.020 26095.244 - 26214.400: 99.8204% ( 3) 00:11:33.020 26214.400 - 26333.556: 99.8473% ( 3) 00:11:33.020 26333.556 - 26452.713: 99.8653% ( 2) 00:11:33.020 26452.713 - 26571.869: 99.9012% ( 4) 00:11:33.020 26571.869 - 26691.025: 99.9282% ( 3) 00:11:33.020 26691.025 - 26810.182: 99.9461% ( 2) 00:11:33.020 26810.182 - 26929.338: 99.9820% ( 4) 00:11:33.020 26929.338 - 27048.495: 100.0000% ( 2) 00:11:33.020 00:11:33.020 09:17:19 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:11:33.020 ************************************ 00:11:33.020 END TEST nvme_perf 00:11:33.020 ************************************ 00:11:33.020 00:11:33.020 real 0m2.672s 00:11:33.020 user 0m2.299s 00:11:33.020 sys 0m0.265s 00:11:33.020 09:17:19 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:33.020 09:17:19 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:11:33.020 09:17:19 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:33.020 09:17:19 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:11:33.020 09:17:19 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:33.020 09:17:19 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.020 09:17:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:33.020 ************************************ 00:11:33.020 START TEST 
nvme_hello_world 00:11:33.020 ************************************ 00:11:33.020 09:17:19 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:11:33.279 Initializing NVMe Controllers 00:11:33.279 Attached to 0000:00:10.0 00:11:33.279 Namespace ID: 1 size: 6GB 00:11:33.279 Attached to 0000:00:11.0 00:11:33.279 Namespace ID: 1 size: 5GB 00:11:33.279 Attached to 0000:00:13.0 00:11:33.279 Namespace ID: 1 size: 1GB 00:11:33.279 Attached to 0000:00:12.0 00:11:33.279 Namespace ID: 1 size: 4GB 00:11:33.279 Namespace ID: 2 size: 4GB 00:11:33.279 Namespace ID: 3 size: 4GB 00:11:33.279 Initialization complete. 00:11:33.279 INFO: using host memory buffer for IO 00:11:33.279 Hello world! 00:11:33.279 INFO: using host memory buffer for IO 00:11:33.279 Hello world! 00:11:33.279 INFO: using host memory buffer for IO 00:11:33.279 Hello world! 00:11:33.279 INFO: using host memory buffer for IO 00:11:33.279 Hello world! 00:11:33.279 INFO: using host memory buffer for IO 00:11:33.279 Hello world! 00:11:33.279 INFO: using host memory buffer for IO 00:11:33.279 Hello world! 00:11:33.279 ************************************ 00:11:33.279 END TEST nvme_hello_world 00:11:33.279 ************************************ 00:11:33.279 00:11:33.279 real 0m0.318s 00:11:33.279 user 0m0.137s 00:11:33.279 sys 0m0.128s 00:11:33.279 09:17:19 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:33.279 09:17:19 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:33.279 09:17:19 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:33.279 09:17:19 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:11:33.279 09:17:19 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:33.279 09:17:19 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.279 09:17:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:33.279 ************************************ 00:11:33.279 START TEST nvme_sgl 00:11:33.279 ************************************ 00:11:33.279 09:17:19 nvme.nvme_sgl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:11:33.537 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:11:33.537 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:11:33.537 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:11:33.537 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:11:33.537 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:11:33.537 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:11:33.537 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:11:33.537 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:11:33.797 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:11:33.797 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:11:33.797 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:11:33.797 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:11:33.797 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:11:33.797 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:11:33.797 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:11:33.797 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:11:33.797 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:11:33.797 0000:00:13.0: build_io_request_5 Invalid IO 
length parameter 00:11:33.797 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:11:33.797 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:11:33.797 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:11:33.797 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:11:33.797 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:11:33.797 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:11:33.797 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:11:33.797 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:11:33.797 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:11:33.797 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:11:33.797 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:11:33.797 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:11:33.797 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:11:33.797 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:11:33.797 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:11:33.797 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:11:33.797 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:11:33.797 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:11:33.797 NVMe Readv/Writev Request test 00:11:33.797 Attached to 0000:00:10.0 00:11:33.797 Attached to 0000:00:11.0 00:11:33.797 Attached to 0000:00:13.0 00:11:33.797 Attached to 0000:00:12.0 00:11:33.797 0000:00:10.0: build_io_request_2 test passed 00:11:33.797 0000:00:10.0: build_io_request_4 test passed 00:11:33.797 0000:00:10.0: build_io_request_5 test passed 00:11:33.797 0000:00:10.0: build_io_request_6 test passed 00:11:33.797 0000:00:10.0: build_io_request_7 test passed 00:11:33.797 0000:00:10.0: build_io_request_10 test passed 00:11:33.797 0000:00:11.0: build_io_request_2 test passed 00:11:33.797 0000:00:11.0: build_io_request_4 test passed 00:11:33.797 0000:00:11.0: build_io_request_5 test passed 00:11:33.797 0000:00:11.0: build_io_request_6 test passed 00:11:33.797 0000:00:11.0: build_io_request_7 test passed 00:11:33.797 0000:00:11.0: build_io_request_10 test passed 00:11:33.797 Cleaning up... 
00:11:33.797 ************************************ 00:11:33.797 END TEST nvme_sgl 00:11:33.797 ************************************ 00:11:33.797 00:11:33.797 real 0m0.406s 00:11:33.797 user 0m0.228s 00:11:33.797 sys 0m0.129s 00:11:33.797 09:17:19 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:33.797 09:17:19 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:11:33.797 09:17:20 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:33.797 09:17:20 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:11:33.797 09:17:20 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:33.797 09:17:20 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.797 09:17:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:33.797 ************************************ 00:11:33.797 START TEST nvme_e2edp 00:11:33.797 ************************************ 00:11:33.797 09:17:20 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:11:34.056 NVMe Write/Read with End-to-End data protection test 00:11:34.056 Attached to 0000:00:10.0 00:11:34.056 Attached to 0000:00:11.0 00:11:34.056 Attached to 0000:00:13.0 00:11:34.056 Attached to 0000:00:12.0 00:11:34.056 Cleaning up... 00:11:34.056 00:11:34.056 real 0m0.290s 00:11:34.056 user 0m0.121s 00:11:34.056 sys 0m0.121s 00:11:34.056 09:17:20 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:34.056 ************************************ 00:11:34.056 END TEST nvme_e2edp 00:11:34.056 ************************************ 00:11:34.056 09:17:20 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:11:34.056 09:17:20 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:34.056 09:17:20 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:11:34.056 09:17:20 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:34.056 09:17:20 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:34.056 09:17:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:34.056 ************************************ 00:11:34.056 START TEST nvme_reserve 00:11:34.056 ************************************ 00:11:34.056 09:17:20 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:11:34.315 ===================================================== 00:11:34.315 NVMe Controller at PCI bus 0, device 16, function 0 00:11:34.315 ===================================================== 00:11:34.315 Reservations: Not Supported 00:11:34.315 ===================================================== 00:11:34.315 NVMe Controller at PCI bus 0, device 17, function 0 00:11:34.315 ===================================================== 00:11:34.315 Reservations: Not Supported 00:11:34.315 ===================================================== 00:11:34.315 NVMe Controller at PCI bus 0, device 19, function 0 00:11:34.315 ===================================================== 00:11:34.315 Reservations: Not Supported 00:11:34.315 ===================================================== 00:11:34.315 NVMe Controller at PCI bus 0, device 18, function 0 00:11:34.315 ===================================================== 00:11:34.315 Reservations: Not Supported 00:11:34.315 Reservation test passed 00:11:34.315 00:11:34.315 real 0m0.242s 00:11:34.315 user 0m0.085s 00:11:34.315 sys 0m0.117s 00:11:34.315 
************************************ 00:11:34.315 END TEST nvme_reserve 00:11:34.315 ************************************ 00:11:34.315 09:17:20 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:34.315 09:17:20 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:11:34.315 09:17:20 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:34.315 09:17:20 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:11:34.315 09:17:20 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:34.315 09:17:20 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:34.315 09:17:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:34.315 ************************************ 00:11:34.315 START TEST nvme_err_injection 00:11:34.315 ************************************ 00:11:34.315 09:17:20 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:11:34.587 NVMe Error Injection test 00:11:34.587 Attached to 0000:00:10.0 00:11:34.587 Attached to 0000:00:11.0 00:11:34.587 Attached to 0000:00:13.0 00:11:34.587 Attached to 0000:00:12.0 00:11:34.587 0000:00:10.0: get features failed as expected 00:11:34.587 0000:00:11.0: get features failed as expected 00:11:34.587 0000:00:13.0: get features failed as expected 00:11:34.587 0000:00:12.0: get features failed as expected 00:11:34.587 0000:00:11.0: get features successfully as expected 00:11:34.587 0000:00:13.0: get features successfully as expected 00:11:34.587 0000:00:12.0: get features successfully as expected 00:11:34.587 0000:00:10.0: get features successfully as expected 00:11:34.587 0000:00:12.0: read failed as expected 00:11:34.587 0000:00:10.0: read failed as expected 00:11:34.587 0000:00:11.0: read failed as expected 00:11:34.587 0000:00:13.0: read failed as expected 00:11:34.587 0000:00:10.0: read successfully as expected 00:11:34.587 0000:00:11.0: read successfully as expected 00:11:34.587 0000:00:13.0: read successfully as expected 00:11:34.587 0000:00:12.0: read successfully as expected 00:11:34.587 Cleaning up... 
00:11:34.587 00:11:34.587 real 0m0.264s 00:11:34.587 user 0m0.114s 00:11:34.587 sys 0m0.108s 00:11:34.587 ************************************ 00:11:34.587 END TEST nvme_err_injection 00:11:34.587 ************************************ 00:11:34.587 09:17:20 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:34.587 09:17:20 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:11:34.845 09:17:20 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:34.845 09:17:20 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:34.845 09:17:20 nvme -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:11:34.845 09:17:20 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:34.845 09:17:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:34.845 ************************************ 00:11:34.845 START TEST nvme_overhead 00:11:34.845 ************************************ 00:11:34.845 09:17:20 nvme.nvme_overhead -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:36.219 Initializing NVMe Controllers 00:11:36.219 Attached to 0000:00:10.0 00:11:36.219 Attached to 0000:00:11.0 00:11:36.219 Attached to 0000:00:13.0 00:11:36.219 Attached to 0000:00:12.0 00:11:36.219 Initialization complete. Launching workers. 00:11:36.219 submit (in ns) avg, min, max = 17253.5, 14373.6, 85494.5 00:11:36.219 complete (in ns) avg, min, max = 11258.9, 9897.7, 93627.7 00:11:36.219 00:11:36.219 Submit histogram 00:11:36.219 ================ 00:11:36.219 Range in us Cumulative Count 00:11:36.219 14.371 - 14.429: 0.0091% ( 1) 00:11:36.219 15.244 - 15.360: 0.0547% ( 5) 00:11:36.219 15.360 - 15.476: 0.7015% ( 71) 00:11:36.219 15.476 - 15.593: 5.2473% ( 499) 00:11:36.219 15.593 - 15.709: 18.5114% ( 1456) 00:11:36.219 15.709 - 15.825: 34.5905% ( 1765) 00:11:36.219 15.825 - 15.942: 45.9780% ( 1250) 00:11:36.220 15.942 - 16.058: 54.0038% ( 881) 00:11:36.220 16.058 - 16.175: 58.9323% ( 541) 00:11:36.220 16.175 - 16.291: 62.6036% ( 403) 00:11:36.220 16.291 - 16.407: 64.6443% ( 224) 00:11:36.220 16.407 - 16.524: 65.7921% ( 126) 00:11:36.220 16.524 - 16.640: 66.6667% ( 96) 00:11:36.220 16.640 - 16.756: 67.2770% ( 67) 00:11:36.220 16.756 - 16.873: 67.7143% ( 48) 00:11:36.220 16.873 - 16.989: 67.9785% ( 29) 00:11:36.220 16.989 - 17.105: 68.0878% ( 12) 00:11:36.220 17.105 - 17.222: 68.2154% ( 14) 00:11:36.220 17.222 - 17.338: 68.4158% ( 22) 00:11:36.220 17.338 - 17.455: 68.6253% ( 23) 00:11:36.220 17.455 - 17.571: 68.8531% ( 25) 00:11:36.220 17.571 - 17.687: 69.0079% ( 17) 00:11:36.220 17.687 - 17.804: 69.1172% ( 12) 00:11:36.220 17.804 - 17.920: 69.1628% ( 5) 00:11:36.220 17.920 - 18.036: 69.2266% ( 7) 00:11:36.220 18.036 - 18.153: 69.2630% ( 4) 00:11:36.220 18.153 - 18.269: 69.2994% ( 4) 00:11:36.220 18.269 - 18.385: 69.6547% ( 39) 00:11:36.220 18.385 - 18.502: 71.7774% ( 233) 00:11:36.220 18.502 - 18.618: 76.5783% ( 527) 00:11:36.220 18.618 - 18.735: 82.2356% ( 621) 00:11:36.220 18.735 - 18.851: 85.7885% ( 390) 00:11:36.220 18.851 - 18.967: 87.7562% ( 216) 00:11:36.220 18.967 - 19.084: 88.8585% ( 121) 00:11:36.220 19.084 - 19.200: 89.9062% ( 115) 00:11:36.220 19.200 - 19.316: 90.6805% ( 85) 00:11:36.220 19.316 - 19.433: 91.4002% ( 79) 00:11:36.220 19.433 - 19.549: 91.8010% ( 44) 00:11:36.220 19.549 - 19.665: 92.0561% ( 28) 00:11:36.220 19.665 - 19.782: 92.2110% ( 17) 00:11:36.220 19.782 - 19.898: 92.3750% ( 18) 
00:11:36.220 19.898 - 20.015: 92.5572% ( 20) 00:11:36.220 20.015 - 20.131: 92.6756% ( 13) 00:11:36.220 20.131 - 20.247: 92.7940% ( 13) 00:11:36.220 20.247 - 20.364: 92.9398% ( 16) 00:11:36.220 20.364 - 20.480: 93.0947% ( 17) 00:11:36.220 20.480 - 20.596: 93.1949% ( 11) 00:11:36.220 20.596 - 20.713: 93.3133% ( 13) 00:11:36.220 20.713 - 20.829: 93.3953% ( 9) 00:11:36.220 20.829 - 20.945: 93.4773% ( 9) 00:11:36.220 20.945 - 21.062: 93.5684% ( 10) 00:11:36.220 21.062 - 21.178: 93.6139% ( 5) 00:11:36.220 21.178 - 21.295: 93.6777% ( 7) 00:11:36.220 21.295 - 21.411: 93.8052% ( 14) 00:11:36.220 21.411 - 21.527: 93.9145% ( 12) 00:11:36.220 21.527 - 21.644: 94.0421% ( 14) 00:11:36.220 21.644 - 21.760: 94.1605% ( 13) 00:11:36.220 21.760 - 21.876: 94.3063% ( 16) 00:11:36.220 21.876 - 21.993: 94.4338% ( 14) 00:11:36.220 21.993 - 22.109: 94.5796% ( 16) 00:11:36.220 22.109 - 22.225: 94.6798% ( 11) 00:11:36.220 22.225 - 22.342: 94.7891% ( 12) 00:11:36.220 22.342 - 22.458: 94.9986% ( 23) 00:11:36.220 22.458 - 22.575: 95.1353% ( 15) 00:11:36.220 22.575 - 22.691: 95.2993% ( 18) 00:11:36.220 22.691 - 22.807: 95.4086% ( 12) 00:11:36.220 22.807 - 22.924: 95.4997% ( 10) 00:11:36.220 22.924 - 23.040: 95.6181% ( 13) 00:11:36.220 23.040 - 23.156: 95.7001% ( 9) 00:11:36.220 23.156 - 23.273: 95.7639% ( 7) 00:11:36.220 23.273 - 23.389: 95.8641% ( 11) 00:11:36.220 23.389 - 23.505: 96.0007% ( 15) 00:11:36.220 23.505 - 23.622: 96.0918% ( 10) 00:11:36.220 23.622 - 23.738: 96.1829% ( 10) 00:11:36.220 23.738 - 23.855: 96.2649% ( 9) 00:11:36.220 23.855 - 23.971: 96.3287% ( 7) 00:11:36.220 23.971 - 24.087: 96.3925% ( 7) 00:11:36.220 24.087 - 24.204: 96.4562% ( 7) 00:11:36.220 24.204 - 24.320: 96.5382% ( 9) 00:11:36.220 24.320 - 24.436: 96.5473% ( 1) 00:11:36.220 24.436 - 24.553: 96.6566% ( 12) 00:11:36.220 24.553 - 24.669: 96.7295% ( 8) 00:11:36.220 24.669 - 24.785: 96.8206% ( 10) 00:11:36.220 24.785 - 24.902: 96.8571% ( 4) 00:11:36.220 24.902 - 25.018: 96.9482% ( 10) 00:11:36.220 25.018 - 25.135: 97.0210% ( 8) 00:11:36.220 25.135 - 25.251: 97.1030% ( 9) 00:11:36.220 25.251 - 25.367: 97.2124% ( 12) 00:11:36.220 25.367 - 25.484: 97.3399% ( 14) 00:11:36.220 25.484 - 25.600: 97.4492% ( 12) 00:11:36.220 25.600 - 25.716: 97.5585% ( 12) 00:11:36.220 25.716 - 25.833: 97.7225% ( 18) 00:11:36.220 25.833 - 25.949: 97.8045% ( 9) 00:11:36.220 25.949 - 26.065: 97.9138% ( 12) 00:11:36.220 26.065 - 26.182: 97.9411% ( 3) 00:11:36.220 26.182 - 26.298: 98.0049% ( 7) 00:11:36.220 26.298 - 26.415: 98.0596% ( 6) 00:11:36.220 26.415 - 26.531: 98.1142% ( 6) 00:11:36.220 26.531 - 26.647: 98.1780% ( 7) 00:11:36.220 26.647 - 26.764: 98.2691% ( 10) 00:11:36.220 26.764 - 26.880: 98.3420% ( 8) 00:11:36.220 26.880 - 26.996: 98.3966% ( 6) 00:11:36.220 26.996 - 27.113: 98.4969% ( 11) 00:11:36.220 27.113 - 27.229: 98.5424% ( 5) 00:11:36.220 27.229 - 27.345: 98.6062% ( 7) 00:11:36.220 27.345 - 27.462: 98.6244% ( 2) 00:11:36.220 27.462 - 27.578: 98.6973% ( 8) 00:11:36.220 27.578 - 27.695: 98.7519% ( 6) 00:11:36.220 27.695 - 27.811: 98.8066% ( 6) 00:11:36.220 27.811 - 27.927: 98.8704% ( 7) 00:11:36.220 27.927 - 28.044: 98.9159% ( 5) 00:11:36.220 28.044 - 28.160: 98.9615% ( 5) 00:11:36.220 28.276 - 28.393: 98.9797% ( 2) 00:11:36.220 28.393 - 28.509: 99.0343% ( 6) 00:11:36.220 28.509 - 28.625: 99.0708% ( 4) 00:11:36.220 28.625 - 28.742: 99.0799% ( 1) 00:11:36.220 28.742 - 28.858: 99.1072% ( 3) 00:11:36.220 28.858 - 28.975: 99.1254% ( 2) 00:11:36.220 28.975 - 29.091: 99.1710% ( 5) 00:11:36.220 29.091 - 29.207: 99.1983% ( 3) 00:11:36.220 29.207 - 29.324: 99.2074% ( 
1) 00:11:36.220 29.324 - 29.440: 99.2803% ( 8) 00:11:36.220 29.440 - 29.556: 99.3259% ( 5) 00:11:36.220 29.556 - 29.673: 99.3532% ( 3) 00:11:36.220 29.673 - 29.789: 99.3714% ( 2) 00:11:36.220 29.789 - 30.022: 99.4079% ( 4) 00:11:36.220 30.022 - 30.255: 99.4443% ( 4) 00:11:36.220 30.255 - 30.487: 99.4716% ( 3) 00:11:36.220 30.487 - 30.720: 99.5081% ( 4) 00:11:36.220 30.720 - 30.953: 99.5445% ( 4) 00:11:36.220 30.953 - 31.185: 99.5901% ( 5) 00:11:36.220 31.418 - 31.651: 99.6083% ( 2) 00:11:36.220 31.884 - 32.116: 99.6265% ( 2) 00:11:36.220 32.116 - 32.349: 99.6356% ( 1) 00:11:36.220 32.349 - 32.582: 99.6447% ( 1) 00:11:36.220 32.582 - 32.815: 99.6812% ( 4) 00:11:36.220 32.815 - 33.047: 99.6994% ( 2) 00:11:36.220 33.047 - 33.280: 99.7085% ( 1) 00:11:36.220 33.280 - 33.513: 99.7267% ( 2) 00:11:36.220 33.513 - 33.745: 99.7358% ( 1) 00:11:36.220 33.745 - 33.978: 99.7449% ( 1) 00:11:36.220 34.676 - 34.909: 99.7540% ( 1) 00:11:36.220 34.909 - 35.142: 99.7723% ( 2) 00:11:36.220 35.375 - 35.607: 99.7814% ( 1) 00:11:36.220 35.607 - 35.840: 99.7905% ( 1) 00:11:36.220 35.840 - 36.073: 99.8178% ( 3) 00:11:36.220 36.073 - 36.305: 99.8269% ( 1) 00:11:36.220 36.771 - 37.004: 99.8542% ( 3) 00:11:36.220 37.004 - 37.236: 99.8634% ( 1) 00:11:36.220 37.236 - 37.469: 99.8725% ( 1) 00:11:36.220 37.935 - 38.167: 99.8816% ( 1) 00:11:36.220 39.796 - 40.029: 99.8907% ( 1) 00:11:36.220 40.727 - 40.960: 99.8998% ( 1) 00:11:36.220 41.193 - 41.425: 99.9089% ( 1) 00:11:36.220 45.847 - 46.080: 99.9180% ( 1) 00:11:36.220 46.080 - 46.313: 99.9271% ( 1) 00:11:36.220 46.545 - 46.778: 99.9362% ( 1) 00:11:36.220 47.476 - 47.709: 99.9453% ( 1) 00:11:36.220 48.407 - 48.640: 99.9545% ( 1) 00:11:36.220 49.804 - 50.036: 99.9636% ( 1) 00:11:36.220 50.735 - 50.967: 99.9727% ( 1) 00:11:36.220 57.484 - 57.716: 99.9818% ( 1) 00:11:36.220 58.647 - 58.880: 99.9909% ( 1) 00:11:36.220 85.178 - 85.644: 100.0000% ( 1) 00:11:36.220 00:11:36.220 Complete histogram 00:11:36.220 ================== 00:11:36.220 Range in us Cumulative Count 00:11:36.220 9.891 - 9.949: 0.0638% ( 7) 00:11:36.220 9.949 - 10.007: 0.5648% ( 55) 00:11:36.220 10.007 - 10.065: 2.8423% ( 250) 00:11:36.220 10.065 - 10.124: 10.6313% ( 855) 00:11:36.220 10.124 - 10.182: 24.3600% ( 1507) 00:11:36.220 10.182 - 10.240: 39.6739% ( 1681) 00:11:36.220 10.240 - 10.298: 51.0340% ( 1247) 00:11:36.220 10.298 - 10.356: 57.6478% ( 726) 00:11:36.220 10.356 - 10.415: 61.0458% ( 373) 00:11:36.220 10.415 - 10.473: 62.6947% ( 181) 00:11:36.220 10.473 - 10.531: 63.5875% ( 98) 00:11:36.220 10.531 - 10.589: 64.0521% ( 51) 00:11:36.220 10.589 - 10.647: 64.4165% ( 40) 00:11:36.220 10.647 - 10.705: 64.6989% ( 31) 00:11:36.220 10.705 - 10.764: 64.9904% ( 32) 00:11:36.220 10.764 - 10.822: 65.1817% ( 21) 00:11:36.220 10.822 - 10.880: 65.4095% ( 25) 00:11:36.220 10.880 - 10.938: 65.5644% ( 17) 00:11:36.220 10.938 - 10.996: 65.7648% ( 22) 00:11:36.220 10.996 - 11.055: 66.0016% ( 26) 00:11:36.220 11.055 - 11.113: 66.5118% ( 56) 00:11:36.220 11.113 - 11.171: 67.0766% ( 62) 00:11:36.220 11.171 - 11.229: 67.7416% ( 73) 00:11:36.220 11.229 - 11.287: 68.3520% ( 67) 00:11:36.220 11.287 - 11.345: 68.8348% ( 53) 00:11:36.220 11.345 - 11.404: 69.3268% ( 54) 00:11:36.220 11.404 - 11.462: 69.6547% ( 36) 00:11:36.220 11.462 - 11.520: 69.8187% ( 18) 00:11:36.220 11.520 - 11.578: 69.9736% ( 17) 00:11:36.220 11.578 - 11.636: 70.1285% ( 17) 00:11:36.220 11.636 - 11.695: 70.2560% ( 14) 00:11:36.220 11.695 - 11.753: 70.3835% ( 14) 00:11:36.220 11.753 - 11.811: 70.4564% ( 8) 00:11:36.220 11.811 - 11.869: 70.4746% ( 2) 
00:11:36.220 11.869 - 11.927: 70.5111% ( 4) 00:11:36.221 11.927 - 11.985: 70.5384% ( 3) 00:11:36.221 11.985 - 12.044: 70.5748% ( 4) 00:11:36.221 12.044 - 12.102: 70.7388% ( 18) 00:11:36.221 12.102 - 12.160: 71.3947% ( 72) 00:11:36.221 12.160 - 12.218: 73.5993% ( 242) 00:11:36.221 12.218 - 12.276: 77.9084% ( 473) 00:11:36.221 12.276 - 12.335: 83.0281% ( 562) 00:11:36.221 12.335 - 12.393: 86.9272% ( 428) 00:11:36.221 12.393 - 12.451: 89.0863% ( 237) 00:11:36.221 12.451 - 12.509: 90.2250% ( 125) 00:11:36.221 12.509 - 12.567: 91.0085% ( 86) 00:11:36.221 12.567 - 12.625: 91.5277% ( 57) 00:11:36.221 12.625 - 12.684: 91.8648% ( 37) 00:11:36.221 12.684 - 12.742: 92.0652% ( 22) 00:11:36.221 12.742 - 12.800: 92.3567% ( 32) 00:11:36.221 12.800 - 12.858: 92.5207% ( 18) 00:11:36.221 12.858 - 12.916: 92.6483% ( 14) 00:11:36.221 12.916 - 12.975: 92.7940% ( 16) 00:11:36.221 12.975 - 13.033: 92.9216% ( 14) 00:11:36.221 13.033 - 13.091: 93.0218% ( 11) 00:11:36.221 13.091 - 13.149: 93.1584% ( 15) 00:11:36.221 13.149 - 13.207: 93.3862% ( 25) 00:11:36.221 13.207 - 13.265: 93.6139% ( 25) 00:11:36.221 13.265 - 13.324: 93.9328% ( 35) 00:11:36.221 13.324 - 13.382: 94.1878% ( 28) 00:11:36.221 13.382 - 13.440: 94.5340% ( 38) 00:11:36.221 13.440 - 13.498: 94.8711% ( 37) 00:11:36.221 13.498 - 13.556: 95.0442% ( 19) 00:11:36.221 13.556 - 13.615: 95.2173% ( 19) 00:11:36.221 13.615 - 13.673: 95.2810% ( 7) 00:11:36.221 13.673 - 13.731: 95.3539% ( 8) 00:11:36.221 13.731 - 13.789: 95.4359% ( 9) 00:11:36.221 13.789 - 13.847: 95.5270% ( 10) 00:11:36.221 13.847 - 13.905: 95.5817% ( 6) 00:11:36.221 13.905 - 13.964: 95.6363% ( 6) 00:11:36.221 13.964 - 14.022: 95.6728% ( 4) 00:11:36.221 14.022 - 14.080: 95.7001% ( 3) 00:11:36.221 14.080 - 14.138: 95.7365% ( 4) 00:11:36.221 14.138 - 14.196: 95.7548% ( 2) 00:11:36.221 14.196 - 14.255: 95.7821% ( 3) 00:11:36.221 14.255 - 14.313: 95.8094% ( 3) 00:11:36.221 14.313 - 14.371: 95.8459% ( 4) 00:11:36.221 14.371 - 14.429: 95.8550% ( 1) 00:11:36.221 14.429 - 14.487: 95.9005% ( 5) 00:11:36.221 14.487 - 14.545: 95.9278% ( 3) 00:11:36.221 14.545 - 14.604: 95.9552% ( 3) 00:11:36.221 14.604 - 14.662: 95.9734% ( 2) 00:11:36.221 14.662 - 14.720: 95.9825% ( 1) 00:11:36.221 14.720 - 14.778: 96.0281% ( 5) 00:11:36.221 14.778 - 14.836: 96.0827% ( 6) 00:11:36.221 14.836 - 14.895: 96.1009% ( 2) 00:11:36.221 14.895 - 15.011: 96.1920% ( 10) 00:11:36.221 15.011 - 15.127: 96.2558% ( 7) 00:11:36.221 15.127 - 15.244: 96.3014% ( 5) 00:11:36.221 15.244 - 15.360: 96.3378% ( 4) 00:11:36.221 15.360 - 15.476: 96.3833% ( 5) 00:11:36.221 15.476 - 15.593: 96.4380% ( 6) 00:11:36.221 15.593 - 15.709: 96.5109% ( 8) 00:11:36.221 15.709 - 15.825: 96.5564% ( 5) 00:11:36.221 15.825 - 15.942: 96.5838% ( 3) 00:11:36.221 15.942 - 16.058: 96.6384% ( 6) 00:11:36.221 16.058 - 16.175: 96.6840% ( 5) 00:11:36.221 16.175 - 16.291: 96.7751% ( 10) 00:11:36.221 16.291 - 16.407: 96.8571% ( 9) 00:11:36.221 16.407 - 16.524: 96.8935% ( 4) 00:11:36.221 16.524 - 16.640: 96.9299% ( 4) 00:11:36.221 16.640 - 16.756: 96.9664% ( 4) 00:11:36.221 16.756 - 16.873: 97.0393% ( 8) 00:11:36.221 16.873 - 16.989: 97.1395% ( 11) 00:11:36.221 16.989 - 17.105: 97.2761% ( 15) 00:11:36.221 17.105 - 17.222: 97.3399% ( 7) 00:11:36.221 17.222 - 17.338: 97.4310% ( 10) 00:11:36.221 17.338 - 17.455: 97.4674% ( 4) 00:11:36.221 17.455 - 17.571: 97.5039% ( 4) 00:11:36.221 17.571 - 17.687: 97.5768% ( 8) 00:11:36.221 17.687 - 17.804: 97.6314% ( 6) 00:11:36.221 17.804 - 17.920: 97.6679% ( 4) 00:11:36.221 17.920 - 18.036: 97.7498% ( 9) 00:11:36.221 18.036 - 18.153: 97.8409% 
( 10) 00:11:36.221 18.153 - 18.269: 97.8956% ( 6) 00:11:36.221 18.269 - 18.385: 97.9958% ( 11) 00:11:36.221 18.385 - 18.502: 98.0049% ( 1) 00:11:36.221 18.502 - 18.618: 98.0687% ( 7) 00:11:36.221 18.618 - 18.735: 98.1780% ( 12) 00:11:36.221 18.735 - 18.851: 98.2418% ( 7) 00:11:36.221 18.851 - 18.967: 98.3055% ( 7) 00:11:36.221 18.967 - 19.084: 98.3329% ( 3) 00:11:36.221 19.084 - 19.200: 98.3966% ( 7) 00:11:36.221 19.200 - 19.316: 98.4331% ( 4) 00:11:36.221 19.316 - 19.433: 98.4877% ( 6) 00:11:36.221 19.433 - 19.549: 98.5242% ( 4) 00:11:36.221 19.549 - 19.665: 98.5333% ( 1) 00:11:36.221 19.665 - 19.782: 98.5606% ( 3) 00:11:36.221 19.782 - 19.898: 98.5788% ( 2) 00:11:36.221 19.898 - 20.015: 98.5880% ( 1) 00:11:36.221 20.015 - 20.131: 98.6426% ( 6) 00:11:36.221 20.131 - 20.247: 98.6608% ( 2) 00:11:36.221 20.247 - 20.364: 98.6973% ( 4) 00:11:36.221 20.364 - 20.480: 98.7428% ( 5) 00:11:36.221 20.480 - 20.596: 98.7702% ( 3) 00:11:36.221 20.596 - 20.713: 98.8430% ( 8) 00:11:36.221 20.713 - 20.829: 98.8613% ( 2) 00:11:36.221 20.829 - 20.945: 98.8795% ( 2) 00:11:36.221 20.945 - 21.062: 98.9432% ( 7) 00:11:36.221 21.062 - 21.178: 98.9888% ( 5) 00:11:36.221 21.178 - 21.295: 99.0343% ( 5) 00:11:36.221 21.295 - 21.411: 99.0617% ( 3) 00:11:36.221 21.411 - 21.527: 99.0890% ( 3) 00:11:36.221 21.527 - 21.644: 99.1346% ( 5) 00:11:36.221 21.644 - 21.760: 99.1437% ( 1) 00:11:36.221 21.760 - 21.876: 99.1710% ( 3) 00:11:36.221 21.876 - 21.993: 99.2074% ( 4) 00:11:36.221 21.993 - 22.109: 99.2439% ( 4) 00:11:36.221 22.109 - 22.225: 99.2894% ( 5) 00:11:36.221 22.225 - 22.342: 99.3441% ( 6) 00:11:36.221 22.342 - 22.458: 99.3623% ( 2) 00:11:36.221 22.458 - 22.575: 99.3896% ( 3) 00:11:36.221 22.575 - 22.691: 99.4443% ( 6) 00:11:36.221 22.807 - 22.924: 99.4534% ( 1) 00:11:36.221 22.924 - 23.040: 99.4625% ( 1) 00:11:36.221 23.040 - 23.156: 99.4716% ( 1) 00:11:36.221 23.273 - 23.389: 99.4990% ( 3) 00:11:36.221 23.389 - 23.505: 99.5172% ( 2) 00:11:36.221 23.505 - 23.622: 99.5263% ( 1) 00:11:36.221 23.738 - 23.855: 99.5354% ( 1) 00:11:36.221 23.855 - 23.971: 99.5627% ( 3) 00:11:36.221 23.971 - 24.087: 99.5718% ( 1) 00:11:36.221 24.087 - 24.204: 99.5809% ( 1) 00:11:36.221 24.204 - 24.320: 99.5992% ( 2) 00:11:36.221 24.320 - 24.436: 99.6265% ( 3) 00:11:36.221 24.669 - 24.785: 99.6356% ( 1) 00:11:36.221 24.785 - 24.902: 99.6538% ( 2) 00:11:36.221 24.902 - 25.018: 99.6629% ( 1) 00:11:36.221 25.135 - 25.251: 99.6720% ( 1) 00:11:36.221 25.367 - 25.484: 99.6812% ( 1) 00:11:36.221 25.600 - 25.716: 99.6994% ( 2) 00:11:36.221 25.716 - 25.833: 99.7085% ( 1) 00:11:36.221 25.833 - 25.949: 99.7176% ( 1) 00:11:36.221 26.298 - 26.415: 99.7267% ( 1) 00:11:36.221 26.647 - 26.764: 99.7540% ( 3) 00:11:36.221 26.764 - 26.880: 99.7723% ( 2) 00:11:36.221 26.996 - 27.113: 99.7814% ( 1) 00:11:36.221 28.160 - 28.276: 99.7905% ( 1) 00:11:36.221 28.625 - 28.742: 99.7996% ( 1) 00:11:36.221 28.858 - 28.975: 99.8087% ( 1) 00:11:36.221 28.975 - 29.091: 99.8178% ( 1) 00:11:36.221 29.207 - 29.324: 99.8269% ( 1) 00:11:36.221 29.440 - 29.556: 99.8360% ( 1) 00:11:36.221 30.953 - 31.185: 99.8542% ( 2) 00:11:36.221 31.884 - 32.116: 99.8634% ( 1) 00:11:36.221 33.513 - 33.745: 99.8725% ( 1) 00:11:36.221 33.745 - 33.978: 99.8816% ( 1) 00:11:36.221 36.538 - 36.771: 99.8907% ( 1) 00:11:36.221 37.935 - 38.167: 99.8998% ( 1) 00:11:36.221 38.167 - 38.400: 99.9089% ( 1) 00:11:36.221 38.400 - 38.633: 99.9180% ( 1) 00:11:36.221 41.658 - 41.891: 99.9271% ( 1) 00:11:36.221 42.124 - 42.356: 99.9362% ( 1) 00:11:36.221 42.589 - 42.822: 99.9453% ( 1) 00:11:36.221 42.822 - 
43.055: 99.9545% ( 1) 00:11:36.221 49.804 - 50.036: 99.9636% ( 1) 00:11:36.221 82.851 - 83.316: 99.9727% ( 1) 00:11:36.221 92.160 - 92.625: 99.9818% ( 1) 00:11:36.221 92.625 - 93.091: 99.9909% ( 1) 00:11:36.221 93.556 - 94.022: 100.0000% ( 1) 00:11:36.221 00:11:36.221 ************************************ 00:11:36.221 END TEST nvme_overhead 00:11:36.221 ************************************ 00:11:36.221 00:11:36.221 real 0m1.285s 00:11:36.221 user 0m1.107s 00:11:36.221 sys 0m0.129s 00:11:36.221 09:17:22 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:36.221 09:17:22 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:11:36.221 09:17:22 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:36.221 09:17:22 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:36.221 09:17:22 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:11:36.221 09:17:22 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:36.221 09:17:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:36.221 ************************************ 00:11:36.221 START TEST nvme_arbitration 00:11:36.221 ************************************ 00:11:36.221 09:17:22 nvme.nvme_arbitration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:39.504 Initializing NVMe Controllers 00:11:39.504 Attached to 0000:00:10.0 00:11:39.504 Attached to 0000:00:11.0 00:11:39.504 Attached to 0000:00:13.0 00:11:39.504 Attached to 0000:00:12.0 00:11:39.504 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:11:39.504 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:11:39.504 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:11:39.504 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:11:39.505 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:11:39.505 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:11:39.505 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:11:39.505 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:11:39.505 Initialization complete. Launching workers. 
00:11:39.505 Starting thread on core 1 with urgent priority queue 00:11:39.505 Starting thread on core 2 with urgent priority queue 00:11:39.505 Starting thread on core 3 with urgent priority queue 00:11:39.505 Starting thread on core 0 with urgent priority queue 00:11:39.505 QEMU NVMe Ctrl (12340 ) core 0: 618.67 IO/s 161.64 secs/100000 ios 00:11:39.505 QEMU NVMe Ctrl (12342 ) core 0: 618.67 IO/s 161.64 secs/100000 ios 00:11:39.505 QEMU NVMe Ctrl (12341 ) core 1: 682.67 IO/s 146.48 secs/100000 ios 00:11:39.505 QEMU NVMe Ctrl (12342 ) core 1: 682.67 IO/s 146.48 secs/100000 ios 00:11:39.505 QEMU NVMe Ctrl (12343 ) core 2: 597.33 IO/s 167.41 secs/100000 ios 00:11:39.505 QEMU NVMe Ctrl (12342 ) core 3: 746.67 IO/s 133.93 secs/100000 ios 00:11:39.505 ======================================================== 00:11:39.505 00:11:39.505 ************************************ 00:11:39.505 END TEST nvme_arbitration 00:11:39.505 ************************************ 00:11:39.505 00:11:39.505 real 0m3.426s 00:11:39.505 user 0m9.370s 00:11:39.505 sys 0m0.159s 00:11:39.505 09:17:25 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:39.505 09:17:25 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:11:39.505 09:17:25 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:39.505 09:17:25 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:11:39.505 09:17:25 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:39.505 09:17:25 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.505 09:17:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:39.505 ************************************ 00:11:39.505 START TEST nvme_single_aen 00:11:39.505 ************************************ 00:11:39.505 09:17:25 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:11:39.764 Asynchronous Event Request test 00:11:39.764 Attached to 0000:00:10.0 00:11:39.764 Attached to 0000:00:11.0 00:11:39.764 Attached to 0000:00:13.0 00:11:39.764 Attached to 0000:00:12.0 00:11:39.764 Reset controller to setup AER completions for this process 00:11:39.764 Registering asynchronous event callbacks... 
00:11:39.764 Getting orig temperature thresholds of all controllers 00:11:39.764 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:39.764 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:39.764 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:39.764 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:39.764 Setting all controllers temperature threshold low to trigger AER 00:11:39.764 Waiting for all controllers temperature threshold to be set lower 00:11:39.764 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:39.764 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:11:39.764 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:39.764 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:11:39.764 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:39.764 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:11:39.764 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:39.764 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:11:39.764 Waiting for all controllers to trigger AER and reset threshold 00:11:39.764 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:39.764 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:39.764 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:39.764 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:39.764 Cleaning up... 00:11:39.764 00:11:39.764 real 0m0.276s 00:11:39.764 user 0m0.112s 00:11:39.764 sys 0m0.123s 00:11:39.764 09:17:26 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:39.764 ************************************ 00:11:39.764 END TEST nvme_single_aen 00:11:39.764 ************************************ 00:11:39.764 09:17:26 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:11:39.764 09:17:26 nvme -- common/autotest_common.sh@1142 -- # return 0 00:11:39.764 09:17:26 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:11:39.764 09:17:26 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:39.764 09:17:26 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.764 09:17:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:39.764 ************************************ 00:11:39.764 START TEST nvme_doorbell_aers 00:11:39.764 ************************************ 00:11:39.764 09:17:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:11:39.764 09:17:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:11:39.764 09:17:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:11:39.764 09:17:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:11:39.764 09:17:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:11:39.764 09:17:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:11:39.764 09:17:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:11:39.764 09:17:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:39.764 09:17:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:39.764 09:17:26 nvme.nvme_doorbell_aers -- 
common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:11:40.022 09:17:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:11:40.022 09:17:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:40.022 09:17:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:40.022 09:17:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:40.280 [2024-07-12 09:17:26.403256] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70132) is not found. Dropping the request. 00:11:50.271 Executing: test_write_invalid_db 00:11:50.271 Waiting for AER completion... 00:11:50.271 Failure: test_write_invalid_db 00:11:50.271 00:11:50.271 Executing: test_invalid_db_write_overflow_sq 00:11:50.271 Waiting for AER completion... 00:11:50.271 Failure: test_invalid_db_write_overflow_sq 00:11:50.271 00:11:50.271 Executing: test_invalid_db_write_overflow_cq 00:11:50.271 Waiting for AER completion... 00:11:50.271 Failure: test_invalid_db_write_overflow_cq 00:11:50.271 00:11:50.271 09:17:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:50.271 09:17:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:50.271 [2024-07-12 09:17:36.469985] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70132) is not found. Dropping the request. 00:12:00.244 Executing: test_write_invalid_db 00:12:00.244 Waiting for AER completion... 00:12:00.244 Failure: test_write_invalid_db 00:12:00.244 00:12:00.244 Executing: test_invalid_db_write_overflow_sq 00:12:00.244 Waiting for AER completion... 00:12:00.244 Failure: test_invalid_db_write_overflow_sq 00:12:00.244 00:12:00.244 Executing: test_invalid_db_write_overflow_cq 00:12:00.244 Waiting for AER completion... 00:12:00.244 Failure: test_invalid_db_write_overflow_cq 00:12:00.244 00:12:00.244 09:17:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:00.244 09:17:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:00.244 [2024-07-12 09:17:46.569495] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70132) is not found. Dropping the request. 00:12:10.310 Executing: test_write_invalid_db 00:12:10.310 Waiting for AER completion... 00:12:10.310 Failure: test_write_invalid_db 00:12:10.310 00:12:10.310 Executing: test_invalid_db_write_overflow_sq 00:12:10.310 Waiting for AER completion... 00:12:10.310 Failure: test_invalid_db_write_overflow_sq 00:12:10.310 00:12:10.310 Executing: test_invalid_db_write_overflow_cq 00:12:10.310 Waiting for AER completion... 
00:12:10.310 Failure: test_invalid_db_write_overflow_cq 00:12:10.310 00:12:10.310 09:17:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:10.310 09:17:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:10.310 [2024-07-12 09:17:56.518889] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70132) is not found. Dropping the request. 00:12:20.272 Executing: test_write_invalid_db 00:12:20.272 Waiting for AER completion... 00:12:20.272 Failure: test_write_invalid_db 00:12:20.272 00:12:20.272 Executing: test_invalid_db_write_overflow_sq 00:12:20.272 Waiting for AER completion... 00:12:20.272 Failure: test_invalid_db_write_overflow_sq 00:12:20.272 00:12:20.272 Executing: test_invalid_db_write_overflow_cq 00:12:20.272 Waiting for AER completion... 00:12:20.272 Failure: test_invalid_db_write_overflow_cq 00:12:20.272 00:12:20.272 00:12:20.272 real 0m40.249s 00:12:20.272 user 0m34.036s 00:12:20.272 sys 0m5.855s 00:12:20.272 09:18:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:20.272 ************************************ 00:12:20.272 END TEST nvme_doorbell_aers 00:12:20.272 09:18:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:12:20.272 ************************************ 00:12:20.272 09:18:06 nvme -- common/autotest_common.sh@1142 -- # return 0 00:12:20.272 09:18:06 nvme -- nvme/nvme.sh@97 -- # uname 00:12:20.272 09:18:06 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:12:20.272 09:18:06 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:20.272 09:18:06 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:12:20.272 09:18:06 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:20.272 09:18:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:20.272 ************************************ 00:12:20.272 START TEST nvme_multi_aen 00:12:20.272 ************************************ 00:12:20.272 09:18:06 nvme.nvme_multi_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:20.530 [2024-07-12 09:18:06.630844] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70132) is not found. Dropping the request. 00:12:20.530 [2024-07-12 09:18:06.630976] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70132) is not found. Dropping the request. 00:12:20.530 [2024-07-12 09:18:06.631020] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70132) is not found. Dropping the request. 00:12:20.530 [2024-07-12 09:18:06.632890] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70132) is not found. Dropping the request. 00:12:20.530 [2024-07-12 09:18:06.632954] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70132) is not found. Dropping the request. 00:12:20.530 [2024-07-12 09:18:06.632978] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70132) is not found. Dropping the request. 
00:12:20.530 [2024-07-12 09:18:06.634647] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70132) is not found. Dropping the request. 00:12:20.530 [2024-07-12 09:18:06.634706] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70132) is not found. Dropping the request. 00:12:20.530 [2024-07-12 09:18:06.634743] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70132) is not found. Dropping the request. 00:12:20.530 [2024-07-12 09:18:06.636393] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70132) is not found. Dropping the request. 00:12:20.530 [2024-07-12 09:18:06.636448] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70132) is not found. Dropping the request. 00:12:20.530 [2024-07-12 09:18:06.636471] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70132) is not found. Dropping the request. 00:12:20.530 Child process pid: 70653 00:12:20.788 [Child] Asynchronous Event Request test 00:12:20.788 [Child] Attached to 0000:00:10.0 00:12:20.788 [Child] Attached to 0000:00:11.0 00:12:20.788 [Child] Attached to 0000:00:13.0 00:12:20.788 [Child] Attached to 0000:00:12.0 00:12:20.788 [Child] Registering asynchronous event callbacks... 00:12:20.788 [Child] Getting orig temperature thresholds of all controllers 00:12:20.788 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:20.788 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:20.788 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:20.788 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:20.788 [Child] Waiting for all controllers to trigger AER and reset threshold 00:12:20.788 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:20.788 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:20.788 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:20.788 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:20.788 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:20.788 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:20.788 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:20.788 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:20.788 [Child] Cleaning up... 00:12:20.788 Asynchronous Event Request test 00:12:20.788 Attached to 0000:00:10.0 00:12:20.788 Attached to 0000:00:11.0 00:12:20.788 Attached to 0000:00:13.0 00:12:20.788 Attached to 0000:00:12.0 00:12:20.788 Reset controller to setup AER completions for this process 00:12:20.788 Registering asynchronous event callbacks... 
00:12:20.788 Getting orig temperature thresholds of all controllers 00:12:20.788 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:20.788 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:20.788 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:20.788 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:20.788 Setting all controllers temperature threshold low to trigger AER 00:12:20.788 Waiting for all controllers temperature threshold to be set lower 00:12:20.788 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:20.788 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:12:20.788 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:20.788 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:12:20.788 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:20.788 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:12:20.788 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:20.788 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:12:20.788 Waiting for all controllers to trigger AER and reset threshold 00:12:20.788 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:20.788 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:20.788 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:20.788 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:20.788 Cleaning up... 00:12:20.788 00:12:20.788 real 0m0.585s 00:12:20.788 user 0m0.217s 00:12:20.788 sys 0m0.273s 00:12:20.788 09:18:06 nvme.nvme_multi_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:20.788 09:18:06 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:12:20.788 ************************************ 00:12:20.788 END TEST nvme_multi_aen 00:12:20.788 ************************************ 00:12:20.788 09:18:06 nvme -- common/autotest_common.sh@1142 -- # return 0 00:12:20.788 09:18:06 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:20.788 09:18:06 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:20.788 09:18:06 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:20.788 09:18:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:20.788 ************************************ 00:12:20.788 START TEST nvme_startup 00:12:20.788 ************************************ 00:12:20.788 09:18:06 nvme.nvme_startup -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:21.045 Initializing NVMe Controllers 00:12:21.045 Attached to 0000:00:10.0 00:12:21.045 Attached to 0000:00:11.0 00:12:21.045 Attached to 0000:00:13.0 00:12:21.045 Attached to 0000:00:12.0 00:12:21.045 Initialization complete. 00:12:21.045 Time used:187478.953 (us). 
00:12:21.045 00:12:21.045 real 0m0.283s 00:12:21.045 user 0m0.105s 00:12:21.045 sys 0m0.136s 00:12:21.045 09:18:07 nvme.nvme_startup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:21.045 ************************************ 00:12:21.045 END TEST nvme_startup 00:12:21.045 ************************************ 00:12:21.045 09:18:07 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:12:21.045 09:18:07 nvme -- common/autotest_common.sh@1142 -- # return 0 00:12:21.045 09:18:07 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:12:21.045 09:18:07 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:21.046 09:18:07 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:21.046 09:18:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:21.046 ************************************ 00:12:21.046 START TEST nvme_multi_secondary 00:12:21.046 ************************************ 00:12:21.046 09:18:07 nvme.nvme_multi_secondary -- common/autotest_common.sh@1123 -- # nvme_multi_secondary 00:12:21.046 09:18:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=70704 00:12:21.046 09:18:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:12:21.046 09:18:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=70705 00:12:21.046 09:18:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:21.046 09:18:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:12:24.339 Initializing NVMe Controllers 00:12:24.339 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:24.339 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:24.339 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:24.339 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:24.339 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:12:24.339 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:12:24.339 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:12:24.339 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:12:24.339 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:12:24.339 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:12:24.339 Initialization complete. Launching workers. 
00:12:24.339 ======================================================== 00:12:24.339 Latency(us) 00:12:24.339 Device Information : IOPS MiB/s Average min max 00:12:24.339 PCIE (0000:00:10.0) NSID 1 from core 2: 2244.94 8.77 7124.70 2025.89 17015.56 00:12:24.339 PCIE (0000:00:11.0) NSID 1 from core 2: 2244.94 8.77 7118.15 2119.18 21980.53 00:12:24.339 PCIE (0000:00:13.0) NSID 1 from core 2: 2244.94 8.77 7117.57 1735.17 17367.60 00:12:24.339 PCIE (0000:00:12.0) NSID 1 from core 2: 2244.94 8.77 7117.43 1792.97 17584.73 00:12:24.339 PCIE (0000:00:12.0) NSID 2 from core 2: 2244.94 8.77 7117.41 1738.84 17323.18 00:12:24.339 PCIE (0000:00:12.0) NSID 3 from core 2: 2244.94 8.77 7117.36 1594.42 17000.13 00:12:24.339 ======================================================== 00:12:24.339 Total : 13469.65 52.62 7118.77 1594.42 21980.53 00:12:24.339 00:12:24.597 09:18:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 70704 00:12:24.597 Initializing NVMe Controllers 00:12:24.597 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:24.597 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:24.597 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:24.597 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:24.597 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:24.597 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:24.597 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:24.597 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:24.597 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:24.597 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:24.597 Initialization complete. Launching workers. 00:12:24.597 ======================================================== 00:12:24.597 Latency(us) 00:12:24.597 Device Information : IOPS MiB/s Average min max 00:12:24.597 PCIE (0000:00:10.0) NSID 1 from core 1: 5283.39 20.64 3026.51 1485.12 7514.16 00:12:24.597 PCIE (0000:00:11.0) NSID 1 from core 1: 5283.39 20.64 3027.68 1419.28 8065.26 00:12:24.597 PCIE (0000:00:13.0) NSID 1 from core 1: 5283.39 20.64 3027.64 1323.39 6325.72 00:12:24.597 PCIE (0000:00:12.0) NSID 1 from core 1: 5283.39 20.64 3027.60 1279.88 7027.81 00:12:24.597 PCIE (0000:00:12.0) NSID 2 from core 1: 5283.39 20.64 3027.51 1238.42 7162.09 00:12:24.597 PCIE (0000:00:12.0) NSID 3 from core 1: 5283.39 20.64 3027.48 1112.79 7161.15 00:12:24.597 ======================================================== 00:12:24.597 Total : 31700.33 123.83 3027.40 1112.79 8065.26 00:12:24.597 00:12:26.493 Initializing NVMe Controllers 00:12:26.493 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:26.493 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:26.493 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:26.493 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:26.493 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:26.493 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:26.493 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:26.493 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:26.493 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:26.493 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:26.493 Initialization complete. Launching workers. 
00:12:26.493 ======================================================== 00:12:26.493 Latency(us) 00:12:26.493 Device Information : IOPS MiB/s Average min max 00:12:26.493 PCIE (0000:00:10.0) NSID 1 from core 0: 7893.16 30.83 2025.58 975.28 8537.23 00:12:26.493 PCIE (0000:00:11.0) NSID 1 from core 0: 7893.16 30.83 2026.60 985.13 8913.30 00:12:26.493 PCIE (0000:00:13.0) NSID 1 from core 0: 7893.16 30.83 2026.55 933.93 8682.36 00:12:26.493 PCIE (0000:00:12.0) NSID 1 from core 0: 7893.16 30.83 2026.52 879.94 8078.60 00:12:26.493 PCIE (0000:00:12.0) NSID 2 from core 0: 7893.16 30.83 2026.49 842.83 8001.96 00:12:26.493 PCIE (0000:00:12.0) NSID 3 from core 0: 7893.16 30.83 2026.46 795.04 8196.23 00:12:26.493 ======================================================== 00:12:26.493 Total : 47358.96 185.00 2026.37 795.04 8913.30 00:12:26.493 00:12:26.493 09:18:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 70705 00:12:26.493 09:18:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=70774 00:12:26.493 09:18:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:12:26.493 09:18:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=70775 00:12:26.493 09:18:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:12:26.493 09:18:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:29.774 Initializing NVMe Controllers 00:12:29.774 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:29.774 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:29.774 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:29.774 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:29.774 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:29.774 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:29.774 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:29.774 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:29.774 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:29.774 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:29.774 Initialization complete. Launching workers. 
00:12:29.774 ======================================================== 00:12:29.775 Latency(us) 00:12:29.775 Device Information : IOPS MiB/s Average min max 00:12:29.775 PCIE (0000:00:10.0) NSID 1 from core 1: 5703.96 22.28 2803.25 971.68 15168.88 00:12:29.775 PCIE (0000:00:11.0) NSID 1 from core 1: 5703.96 22.28 2804.57 1006.13 15167.12 00:12:29.775 PCIE (0000:00:13.0) NSID 1 from core 1: 5703.96 22.28 2804.50 1027.73 15382.50 00:12:29.775 PCIE (0000:00:12.0) NSID 1 from core 1: 5703.96 22.28 2804.47 1031.82 15381.69 00:12:29.775 PCIE (0000:00:12.0) NSID 2 from core 1: 5703.96 22.28 2804.41 1015.01 15388.18 00:12:29.775 PCIE (0000:00:12.0) NSID 3 from core 1: 5703.96 22.28 2804.35 1014.90 15217.47 00:12:29.775 ======================================================== 00:12:29.775 Total : 34223.76 133.69 2804.26 971.68 15388.18 00:12:29.775 00:12:30.034 Initializing NVMe Controllers 00:12:30.034 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:30.034 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:30.034 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:30.034 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:30.034 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:30.034 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:30.034 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:30.034 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:30.034 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:30.034 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:30.034 Initialization complete. Launching workers. 00:12:30.034 ======================================================== 00:12:30.034 Latency(us) 00:12:30.034 Device Information : IOPS MiB/s Average min max 00:12:30.034 PCIE (0000:00:10.0) NSID 1 from core 0: 5524.22 21.58 2894.49 1008.26 15256.84 00:12:30.034 PCIE (0000:00:11.0) NSID 1 from core 0: 5524.22 21.58 2895.65 1033.56 15332.61 00:12:30.034 PCIE (0000:00:13.0) NSID 1 from core 0: 5524.22 21.58 2895.51 957.32 15013.52 00:12:30.034 PCIE (0000:00:12.0) NSID 1 from core 0: 5524.22 21.58 2895.35 861.78 15151.59 00:12:30.034 PCIE (0000:00:12.0) NSID 2 from core 0: 5524.22 21.58 2895.22 802.95 15401.99 00:12:30.034 PCIE (0000:00:12.0) NSID 3 from core 0: 5524.22 21.58 2895.09 752.32 15485.79 00:12:30.034 ======================================================== 00:12:30.034 Total : 33145.30 129.47 2895.22 752.32 15485.79 00:12:30.034 00:12:31.938 Initializing NVMe Controllers 00:12:31.938 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:31.938 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:31.938 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:31.938 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:31.938 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:12:31.938 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:12:31.938 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:12:31.938 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:12:31.938 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:12:31.938 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:12:31.938 Initialization complete. Launching workers. 
00:12:31.938 ======================================================== 00:12:31.938 Latency(us) 00:12:31.938 Device Information : IOPS MiB/s Average min max 00:12:31.938 PCIE (0000:00:10.0) NSID 1 from core 2: 3500.38 13.67 4569.19 1097.07 31555.22 00:12:31.938 PCIE (0000:00:11.0) NSID 1 from core 2: 3500.38 13.67 4569.75 1003.78 29685.15 00:12:31.938 PCIE (0000:00:13.0) NSID 1 from core 2: 3500.38 13.67 4569.20 1099.47 31220.35 00:12:31.938 PCIE (0000:00:12.0) NSID 1 from core 2: 3500.38 13.67 4570.02 1103.24 31621.81 00:12:31.938 PCIE (0000:00:12.0) NSID 2 from core 2: 3500.38 13.67 4569.70 920.96 31764.63 00:12:31.938 PCIE (0000:00:12.0) NSID 3 from core 2: 3500.38 13.67 4570.07 793.02 31487.20 00:12:31.938 ======================================================== 00:12:31.938 Total : 21002.27 82.04 4569.65 793.02 31764.63 00:12:31.938 00:12:31.938 ************************************ 00:12:31.938 END TEST nvme_multi_secondary 00:12:31.938 ************************************ 00:12:31.938 09:18:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 70774 00:12:31.938 09:18:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 70775 00:12:31.938 00:12:31.938 real 0m10.903s 00:12:31.938 user 0m18.583s 00:12:31.938 sys 0m0.881s 00:12:31.938 09:18:18 nvme.nvme_multi_secondary -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:31.938 09:18:18 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:12:31.938 09:18:18 nvme -- common/autotest_common.sh@1142 -- # return 0 00:12:31.938 09:18:18 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:12:31.938 09:18:18 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:12:31.938 09:18:18 nvme -- common/autotest_common.sh@1087 -- # [[ -e /proc/69716 ]] 00:12:31.938 09:18:18 nvme -- common/autotest_common.sh@1088 -- # kill 69716 00:12:31.938 09:18:18 nvme -- common/autotest_common.sh@1089 -- # wait 69716 00:12:31.938 [2024-07-12 09:18:18.272514] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70652) is not found. Dropping the request. 00:12:31.938 [2024-07-12 09:18:18.272608] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70652) is not found. Dropping the request. 00:12:31.938 [2024-07-12 09:18:18.272641] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70652) is not found. Dropping the request. 00:12:31.938 [2024-07-12 09:18:18.272667] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70652) is not found. Dropping the request. 00:12:31.938 [2024-07-12 09:18:18.275422] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70652) is not found. Dropping the request. 00:12:31.938 [2024-07-12 09:18:18.275496] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70652) is not found. Dropping the request. 00:12:31.938 [2024-07-12 09:18:18.275526] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70652) is not found. Dropping the request. 00:12:31.938 [2024-07-12 09:18:18.275574] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70652) is not found. Dropping the request. 
00:12:31.938 [2024-07-12 09:18:18.278228] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70652) is not found. Dropping the request. 00:12:31.938 [2024-07-12 09:18:18.278308] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70652) is not found. Dropping the request. 00:12:31.938 [2024-07-12 09:18:18.278340] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70652) is not found. Dropping the request. 00:12:31.938 [2024-07-12 09:18:18.278367] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70652) is not found. Dropping the request. 00:12:31.938 [2024-07-12 09:18:18.281982] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70652) is not found. Dropping the request. 00:12:31.938 [2024-07-12 09:18:18.282100] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70652) is not found. Dropping the request. 00:12:31.938 [2024-07-12 09:18:18.282152] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70652) is not found. Dropping the request. 00:12:31.938 [2024-07-12 09:18:18.282478] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70652) is not found. Dropping the request. 00:12:32.506 09:18:18 nvme -- common/autotest_common.sh@1091 -- # rm -f /var/run/spdk_stub0 00:12:32.506 09:18:18 nvme -- common/autotest_common.sh@1095 -- # echo 2 00:12:32.506 09:18:18 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:32.506 09:18:18 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:32.506 09:18:18 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:32.506 09:18:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:32.506 ************************************ 00:12:32.506 START TEST bdev_nvme_reset_stuck_adm_cmd 00:12:32.506 ************************************ 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:32.506 * Looking for test storage... 
00:12:32.506 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:12:32.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
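The get_first_nvme_bdf helper traced above simply asks gen_nvme.sh for every local controller and keeps the first PCI address. A minimal standalone sketch of that idea, assuming gen_nvme.sh and jq behave as they do in this environment (this is not the exact common/autotest_common.sh code):

    rootdir=/home/vagrant/spdk_repo/spdk
    # gen_nvme.sh prints a bdev_nvme_attach_controller config entry per local controller;
    # each entry's params.traddr field is the controller's PCI address (BDF).
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} )) || { echo "no NVMe controllers found" >&2; exit 1; }
    echo "${bdfs[0]}"          # 0000:00:10.0 on this VM
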
00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=70934 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 70934 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 70934 ']' 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:32.506 09:18:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:32.765 [2024-07-12 09:18:18.879312] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:32.765 [2024-07-12 09:18:18.879714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70934 ] 00:12:32.765 [2024-07-12 09:18:19.083038] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.023 [2024-07-12 09:18:19.300922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.023 [2024-07-12 09:18:19.301082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.023 [2024-07-12 09:18:19.301163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.023 [2024-07-12 09:18:19.301180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:33.960 09:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:33.960 09:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:12:33.960 09:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:12:33.960 09:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.960 09:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:33.960 nvme0n1 00:12:33.960 09:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.960 09:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:12:33.960 09:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_nzqla.txt 00:12:33.960 09:18:20 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:12:33.960 09:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.960 09:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:33.960 true 00:12:33.960 09:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.960 09:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:12:33.960 09:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1720775900 00:12:33.961 09:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=70963 00:12:33.961 09:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:33.961 09:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:12:33.961 09:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:35.863 [2024-07-12 09:18:22.103678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:12:35.863 [2024-07-12 09:18:22.104016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:12:35.863 [2024-07-12 09:18:22.104049] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:35.863 [2024-07-12 09:18:22.104072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:35.863 [2024-07-12 09:18:22.106265] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
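Condensed, the sequence the trace above exercises is: attach the controller over PCIe, arm a one-shot error injection that holds the next GET FEATURES admin command, fire that command in the background, then reset the controller so the held request gets completed with the injected status. A hedged sketch against an already-running spdk_tgt, reusing the same RPCs seen in this log (cmd_b64 here stands in for the base64-encoded admin command shown verbatim in the trace above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    # Hold the next GET FEATURES (opc 10) admin command for up to 15 s, then
    # complete it once with SCT=0 / SC=1, without submitting it to the device.
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # Issue the admin command that will get stuck, then reset the controller;
    # the reset path has to complete the pending request manually.
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd_b64" &
    sleep 2
    $rpc bdev_nvme_reset_controller nvme0
    wait
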
00:12:35.863 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 70963 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 70963 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 70963 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_nzqla.txt 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 
-- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:12:35.863 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_nzqla.txt 00:12:36.122 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 70934 00:12:36.122 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 70934 ']' 00:12:36.122 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 70934 00:12:36.122 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:12:36.122 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:36.122 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70934 00:12:36.122 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:36.122 killing process with pid 70934 00:12:36.122 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:36.122 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70934' 00:12:36.122 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 70934 00:12:36.122 09:18:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 70934 00:12:38.021 09:18:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:12:38.021 09:18:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:12:38.021 ************************************ 00:12:38.021 END TEST bdev_nvme_reset_stuck_adm_cmd 00:12:38.021 ************************************ 00:12:38.021 00:12:38.021 real 0m5.719s 00:12:38.021 user 0m19.673s 00:12:38.021 sys 0m0.556s 00:12:38.021 09:18:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:38.021 09:18:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:38.279 09:18:24 nvme -- common/autotest_common.sh@1142 -- # return 0 00:12:38.279 09:18:24 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:12:38.279 09:18:24 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:12:38.279 09:18:24 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:38.279 09:18:24 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:38.279 09:18:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:38.279 ************************************ 00:12:38.279 START TEST nvme_fio 00:12:38.279 ************************************ 00:12:38.279 09:18:24 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:12:38.279 09:18:24 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:12:38.279 09:18:24 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:12:38.279 09:18:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:12:38.279 
09:18:24 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:12:38.279 09:18:24 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:12:38.279 09:18:24 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:38.279 09:18:24 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:38.279 09:18:24 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:12:38.279 09:18:24 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:12:38.279 09:18:24 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:38.279 09:18:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:12:38.279 09:18:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:12:38.279 09:18:24 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:38.279 09:18:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:38.279 09:18:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:38.537 09:18:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:38.537 09:18:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:38.796 09:18:24 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:38.796 09:18:24 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:38.796 09:18:24 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:38.796 09:18:24 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:38.796 09:18:24 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:38.796 09:18:24 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:38.796 09:18:24 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:38.796 09:18:24 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:12:38.796 09:18:24 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:38.796 09:18:24 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:38.796 09:18:24 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:38.796 09:18:24 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:12:38.796 09:18:24 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:38.796 09:18:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:38.796 09:18:25 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:38.796 09:18:25 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:12:38.796 09:18:25 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 
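The fio invocation that follows is the heart of fio_nvme: the SPDK NVMe fio plugin is preloaded (after libasan, since this build is ASAN-instrumented) and the target controller is selected purely through --filename, with the PCI address written with dots instead of colons. A minimal sketch using the paths seen in this run:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    cfg=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
    # ioengine=spdk comes from the job file; the device is chosen via --filename,
    # with ':' replaced by '.' in the traddr (fio treats ':' as a separator).
    LD_PRELOAD="/usr/lib64/libasan.so.8 $plugin" \
        /usr/src/fio/fio "$cfg" '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
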
00:12:38.796 09:18:25 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:39.055 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:39.055 fio-3.35 00:12:39.055 Starting 1 thread 00:12:42.376 00:12:42.376 test: (groupid=0, jobs=1): err= 0: pid=71112: Fri Jul 12 09:18:28 2024 00:12:42.376 read: IOPS=14.6k, BW=57.1MiB/s (59.9MB/s)(114MiB/2001msec) 00:12:42.376 slat (nsec): min=4668, max=46422, avg=6722.35, stdev=2102.41 00:12:42.376 clat (usec): min=245, max=9859, avg=4345.30, stdev=828.59 00:12:42.376 lat (usec): min=251, max=9906, avg=4352.02, stdev=829.64 00:12:42.376 clat percentiles (usec): 00:12:42.376 | 1.00th=[ 3228], 5.00th=[ 3654], 10.00th=[ 3752], 20.00th=[ 3851], 00:12:42.376 | 30.00th=[ 3916], 40.00th=[ 3982], 50.00th=[ 4047], 60.00th=[ 4146], 00:12:42.376 | 70.00th=[ 4424], 80.00th=[ 4817], 90.00th=[ 5080], 95.00th=[ 6390], 00:12:42.376 | 99.00th=[ 7570], 99.50th=[ 7767], 99.90th=[ 8586], 99.95th=[ 8717], 00:12:42.376 | 99.99th=[ 9765] 00:12:42.376 bw ( KiB/s): min=55760, max=65976, per=100.00%, avg=60976.00, stdev=5111.42, samples=3 00:12:42.376 iops : min=13940, max=16494, avg=15244.00, stdev=1277.86, samples=3 00:12:42.376 write: IOPS=14.7k, BW=57.3MiB/s (60.0MB/s)(115MiB/2001msec); 0 zone resets 00:12:42.376 slat (usec): min=4, max=107, avg= 6.98, stdev= 2.33 00:12:42.376 clat (usec): min=271, max=9719, avg=4360.74, stdev=834.56 00:12:42.376 lat (usec): min=278, max=9738, avg=4367.72, stdev=835.67 00:12:42.376 clat percentiles (usec): 00:12:42.376 | 1.00th=[ 3228], 5.00th=[ 3687], 10.00th=[ 3752], 20.00th=[ 3851], 00:12:42.376 | 30.00th=[ 3916], 40.00th=[ 3982], 50.00th=[ 4047], 60.00th=[ 4178], 00:12:42.376 | 70.00th=[ 4490], 80.00th=[ 4817], 90.00th=[ 5080], 95.00th=[ 6456], 00:12:42.376 | 99.00th=[ 7570], 99.50th=[ 7832], 99.90th=[ 8586], 99.95th=[ 8848], 00:12:42.376 | 99.99th=[ 9503] 00:12:42.376 bw ( KiB/s): min=54928, max=65224, per=100.00%, avg=60562.67, stdev=5216.55, samples=3 00:12:42.376 iops : min=13730, max=16306, avg=15140.67, stdev=1305.41, samples=3 00:12:42.376 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:12:42.376 lat (msec) : 2=0.10%, 4=42.90%, 10=56.95% 00:12:42.376 cpu : usr=98.90%, sys=0.10%, ctx=3, majf=0, minf=605 00:12:42.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:42.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:42.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:42.376 issued rwts: total=29268,29333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:42.376 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:42.376 00:12:42.376 Run status group 0 (all jobs): 00:12:42.376 READ: bw=57.1MiB/s (59.9MB/s), 57.1MiB/s-57.1MiB/s (59.9MB/s-59.9MB/s), io=114MiB (120MB), run=2001-2001msec 00:12:42.376 WRITE: bw=57.3MiB/s (60.0MB/s), 57.3MiB/s-57.3MiB/s (60.0MB/s-60.0MB/s), io=115MiB (120MB), run=2001-2001msec 00:12:42.376 ----------------------------------------------------- 00:12:42.376 Suppressions used: 00:12:42.376 count bytes template 00:12:42.376 1 32 /usr/src/fio/parse.c 00:12:42.376 1 8 libtcmalloc_minimal.so 00:12:42.376 ----------------------------------------------------- 00:12:42.376 00:12:42.376 09:18:28 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:42.376 09:18:28 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in 
"${bdfs[@]}" 00:12:42.376 09:18:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:42.376 09:18:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:42.376 09:18:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:42.376 09:18:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:42.634 09:18:28 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:42.634 09:18:28 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:42.634 09:18:28 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:42.634 09:18:28 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:42.634 09:18:28 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:42.634 09:18:28 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:42.634 09:18:28 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:42.634 09:18:28 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:12:42.634 09:18:28 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:42.634 09:18:28 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:42.634 09:18:28 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:42.634 09:18:28 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:12:42.634 09:18:28 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:42.892 09:18:28 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:42.892 09:18:28 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:42.892 09:18:28 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:12:42.892 09:18:29 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:42.892 09:18:29 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:42.892 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:42.892 fio-3.35 00:12:42.892 Starting 1 thread 00:12:46.173 00:12:46.173 test: (groupid=0, jobs=1): err= 0: pid=71173: Fri Jul 12 09:18:32 2024 00:12:46.173 read: IOPS=15.7k, BW=61.3MiB/s (64.2MB/s)(123MiB/2001msec) 00:12:46.173 slat (nsec): min=4673, max=82040, avg=6327.42, stdev=1866.92 00:12:46.173 clat (usec): min=310, max=8361, avg=4061.70, stdev=660.20 00:12:46.173 lat (usec): min=316, max=8412, avg=4068.03, stdev=661.13 00:12:46.173 clat percentiles (usec): 00:12:46.173 | 1.00th=[ 3359], 5.00th=[ 3490], 10.00th=[ 3523], 20.00th=[ 3621], 00:12:46.173 | 30.00th=[ 3687], 40.00th=[ 3752], 50.00th=[ 3818], 60.00th=[ 3949], 00:12:46.173 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 4948], 
00:12:46.173 | 99.00th=[ 6849], 99.50th=[ 7177], 99.90th=[ 7898], 99.95th=[ 8029], 00:12:46.173 | 99.99th=[ 8225] 00:12:46.173 bw ( KiB/s): min=56592, max=69840, per=98.89%, avg=62029.33, stdev=6935.55, samples=3 00:12:46.173 iops : min=14148, max=17460, avg=15507.33, stdev=1733.89, samples=3 00:12:46.173 write: IOPS=15.7k, BW=61.3MiB/s (64.3MB/s)(123MiB/2001msec); 0 zone resets 00:12:46.173 slat (nsec): min=4756, max=48639, avg=6529.20, stdev=1890.77 00:12:46.173 clat (usec): min=339, max=8277, avg=4065.09, stdev=657.75 00:12:46.173 lat (usec): min=346, max=8298, avg=4071.62, stdev=658.68 00:12:46.173 clat percentiles (usec): 00:12:46.173 | 1.00th=[ 3392], 5.00th=[ 3490], 10.00th=[ 3556], 20.00th=[ 3621], 00:12:46.173 | 30.00th=[ 3687], 40.00th=[ 3752], 50.00th=[ 3818], 60.00th=[ 3949], 00:12:46.173 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 4948], 00:12:46.173 | 99.00th=[ 6849], 99.50th=[ 7242], 99.90th=[ 7898], 99.95th=[ 8029], 00:12:46.173 | 99.99th=[ 8160] 00:12:46.173 bw ( KiB/s): min=56880, max=69056, per=98.09%, avg=61589.33, stdev=6539.56, samples=3 00:12:46.173 iops : min=14220, max=17264, avg=15397.33, stdev=1634.89, samples=3 00:12:46.173 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:12:46.173 lat (msec) : 2=0.04%, 4=61.52%, 10=38.40% 00:12:46.173 cpu : usr=98.95%, sys=0.20%, ctx=4, majf=0, minf=606 00:12:46.174 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:46.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:46.174 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:46.174 issued rwts: total=31379,31409,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:46.174 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:46.174 00:12:46.174 Run status group 0 (all jobs): 00:12:46.174 READ: bw=61.3MiB/s (64.2MB/s), 61.3MiB/s-61.3MiB/s (64.2MB/s-64.2MB/s), io=123MiB (129MB), run=2001-2001msec 00:12:46.174 WRITE: bw=61.3MiB/s (64.3MB/s), 61.3MiB/s-61.3MiB/s (64.3MB/s-64.3MB/s), io=123MiB (129MB), run=2001-2001msec 00:12:46.174 ----------------------------------------------------- 00:12:46.174 Suppressions used: 00:12:46.174 count bytes template 00:12:46.174 1 32 /usr/src/fio/parse.c 00:12:46.174 1 8 libtcmalloc_minimal.so 00:12:46.174 ----------------------------------------------------- 00:12:46.174 00:12:46.174 09:18:32 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:46.174 09:18:32 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:46.174 09:18:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:46.174 09:18:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:46.431 09:18:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:46.431 09:18:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:46.688 09:18:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:46.688 09:18:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:46.689 09:18:32 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:46.689 09:18:32 nvme.nvme_fio -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:46.689 09:18:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:46.689 09:18:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:46.689 09:18:32 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:46.689 09:18:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:12:46.689 09:18:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:46.689 09:18:32 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:46.689 09:18:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:12:46.689 09:18:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:46.689 09:18:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:46.689 09:18:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:46.689 09:18:32 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:46.689 09:18:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:12:46.689 09:18:32 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:46.689 09:18:32 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:46.946 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:46.946 fio-3.35 00:12:46.946 Starting 1 thread 00:12:50.271 00:12:50.271 test: (groupid=0, jobs=1): err= 0: pid=71238: Fri Jul 12 09:18:36 2024 00:12:50.271 read: IOPS=16.2k, BW=63.1MiB/s (66.2MB/s)(126MiB/2001msec) 00:12:50.271 slat (nsec): min=4667, max=48270, avg=6195.01, stdev=1903.22 00:12:50.271 clat (usec): min=300, max=9004, avg=3935.86, stdev=612.41 00:12:50.271 lat (usec): min=306, max=9011, avg=3942.06, stdev=613.18 00:12:50.271 clat percentiles (usec): 00:12:50.271 | 1.00th=[ 2966], 5.00th=[ 3392], 10.00th=[ 3490], 20.00th=[ 3556], 00:12:50.271 | 30.00th=[ 3621], 40.00th=[ 3687], 50.00th=[ 3720], 60.00th=[ 3818], 00:12:50.271 | 70.00th=[ 4113], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 4752], 00:12:50.271 | 99.00th=[ 6718], 99.50th=[ 7373], 99.90th=[ 8455], 99.95th=[ 8586], 00:12:50.271 | 99.99th=[ 8717] 00:12:50.271 bw ( KiB/s): min=61984, max=70568, per=100.00%, avg=67341.33, stdev=4671.84, samples=3 00:12:50.271 iops : min=15496, max=17642, avg=16835.33, stdev=1167.96, samples=3 00:12:50.271 write: IOPS=16.2k, BW=63.2MiB/s (66.3MB/s)(126MiB/2001msec); 0 zone resets 00:12:50.271 slat (nsec): min=4771, max=89205, avg=6320.15, stdev=1994.07 00:12:50.271 clat (usec): min=338, max=9098, avg=3949.91, stdev=629.89 00:12:50.271 lat (usec): min=345, max=9106, avg=3956.23, stdev=630.71 00:12:50.271 clat percentiles (usec): 00:12:50.271 | 1.00th=[ 2900], 5.00th=[ 3392], 10.00th=[ 3490], 20.00th=[ 3556], 00:12:50.271 | 30.00th=[ 3621], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3818], 00:12:50.271 | 70.00th=[ 4146], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 4752], 00:12:50.271 | 99.00th=[ 6849], 99.50th=[ 7439], 99.90th=[ 8455], 99.95th=[ 8586], 00:12:50.271 | 99.99th=[ 8848] 00:12:50.271 bw ( KiB/s): min=62456, max=70160, 
per=100.00%, avg=67250.67, stdev=4183.75, samples=3 00:12:50.271 iops : min=15614, max=17540, avg=16812.67, stdev=1045.94, samples=3 00:12:50.271 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:12:50.271 lat (msec) : 2=0.11%, 4=67.92%, 10=31.94% 00:12:50.271 cpu : usr=99.05%, sys=0.05%, ctx=6, majf=0, minf=605 00:12:50.271 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:50.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:50.271 issued rwts: total=32328,32379,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:50.271 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:50.271 00:12:50.271 Run status group 0 (all jobs): 00:12:50.271 READ: bw=63.1MiB/s (66.2MB/s), 63.1MiB/s-63.1MiB/s (66.2MB/s-66.2MB/s), io=126MiB (132MB), run=2001-2001msec 00:12:50.271 WRITE: bw=63.2MiB/s (66.3MB/s), 63.2MiB/s-63.2MiB/s (66.3MB/s-66.3MB/s), io=126MiB (133MB), run=2001-2001msec 00:12:50.271 ----------------------------------------------------- 00:12:50.271 Suppressions used: 00:12:50.271 count bytes template 00:12:50.271 1 32 /usr/src/fio/parse.c 00:12:50.271 1 8 libtcmalloc_minimal.so 00:12:50.271 ----------------------------------------------------- 00:12:50.271 00:12:50.530 09:18:36 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:50.530 09:18:36 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:50.530 09:18:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:50.530 09:18:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:50.789 09:18:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:50.789 09:18:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:51.048 09:18:37 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:51.048 09:18:37 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:51.048 09:18:37 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:51.048 09:18:37 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:51.048 09:18:37 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:51.048 09:18:37 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:51.048 09:18:37 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:51.048 09:18:37 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:12:51.048 09:18:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:51.048 09:18:37 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:51.048 09:18:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:51.048 09:18:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:12:51.048 09:18:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:51.048 09:18:37 nvme.nvme_fio -- 
common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:51.048 09:18:37 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:51.048 09:18:37 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:12:51.048 09:18:37 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:51.048 09:18:37 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:51.048 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:51.048 fio-3.35 00:12:51.048 Starting 1 thread 00:12:55.235 00:12:55.235 test: (groupid=0, jobs=1): err= 0: pid=71299: Fri Jul 12 09:18:41 2024 00:12:55.235 read: IOPS=15.9k, BW=62.3MiB/s (65.3MB/s)(125MiB/2001msec) 00:12:55.235 slat (nsec): min=4687, max=45107, avg=6221.31, stdev=1799.08 00:12:55.235 clat (usec): min=296, max=8895, avg=3990.60, stdev=586.54 00:12:55.235 lat (usec): min=302, max=8940, avg=3996.82, stdev=587.19 00:12:55.235 clat percentiles (usec): 00:12:55.235 | 1.00th=[ 2737], 5.00th=[ 3261], 10.00th=[ 3523], 20.00th=[ 3654], 00:12:55.235 | 30.00th=[ 3752], 40.00th=[ 3785], 50.00th=[ 3851], 60.00th=[ 3916], 00:12:55.235 | 70.00th=[ 4047], 80.00th=[ 4424], 90.00th=[ 4752], 95.00th=[ 5014], 00:12:55.235 | 99.00th=[ 5800], 99.50th=[ 6652], 99.90th=[ 8291], 99.95th=[ 8455], 00:12:55.235 | 99.99th=[ 8717] 00:12:55.235 bw ( KiB/s): min=59432, max=67952, per=98.82%, avg=63021.33, stdev=4415.54, samples=3 00:12:55.235 iops : min=14858, max=16988, avg=15755.33, stdev=1103.88, samples=3 00:12:55.235 write: IOPS=16.0k, BW=62.4MiB/s (65.4MB/s)(125MiB/2001msec); 0 zone resets 00:12:55.235 slat (nsec): min=4794, max=65152, avg=6393.06, stdev=1861.14 00:12:55.235 clat (usec): min=251, max=8816, avg=3998.57, stdev=589.18 00:12:55.235 lat (usec): min=257, max=8834, avg=4004.96, stdev=589.87 00:12:55.235 clat percentiles (usec): 00:12:55.235 | 1.00th=[ 2737], 5.00th=[ 3261], 10.00th=[ 3523], 20.00th=[ 3654], 00:12:55.235 | 30.00th=[ 3752], 40.00th=[ 3818], 50.00th=[ 3851], 60.00th=[ 3949], 00:12:55.235 | 70.00th=[ 4047], 80.00th=[ 4424], 90.00th=[ 4817], 95.00th=[ 5014], 00:12:55.235 | 99.00th=[ 5800], 99.50th=[ 6652], 99.90th=[ 8225], 99.95th=[ 8455], 00:12:55.235 | 99.99th=[ 8586] 00:12:55.235 bw ( KiB/s): min=58928, max=67120, per=98.10%, avg=62682.67, stdev=4138.45, samples=3 00:12:55.235 iops : min=14732, max=16780, avg=15670.67, stdev=1034.61, samples=3 00:12:55.235 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:12:55.235 lat (msec) : 2=0.07%, 4=66.55%, 10=33.34% 00:12:55.235 cpu : usr=98.95%, sys=0.00%, ctx=4, majf=0, minf=603 00:12:55.235 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:55.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:55.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:55.235 issued rwts: total=31903,31963,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:55.235 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:55.235 00:12:55.235 Run status group 0 (all jobs): 00:12:55.235 READ: bw=62.3MiB/s (65.3MB/s), 62.3MiB/s-62.3MiB/s (65.3MB/s-65.3MB/s), io=125MiB (131MB), run=2001-2001msec 00:12:55.235 WRITE: bw=62.4MiB/s (65.4MB/s), 62.4MiB/s-62.4MiB/s (65.4MB/s-65.4MB/s), io=125MiB (131MB), run=2001-2001msec 00:12:55.494 
----------------------------------------------------- 00:12:55.494 Suppressions used: 00:12:55.494 count bytes template 00:12:55.494 1 32 /usr/src/fio/parse.c 00:12:55.494 1 8 libtcmalloc_minimal.so 00:12:55.494 ----------------------------------------------------- 00:12:55.494 00:12:55.494 ************************************ 00:12:55.494 END TEST nvme_fio 00:12:55.494 ************************************ 00:12:55.494 09:18:41 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:55.494 09:18:41 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:12:55.494 00:12:55.494 real 0m17.306s 00:12:55.494 user 0m13.629s 00:12:55.494 sys 0m2.870s 00:12:55.494 09:18:41 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:55.494 09:18:41 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:12:55.494 09:18:41 nvme -- common/autotest_common.sh@1142 -- # return 0 00:12:55.494 ************************************ 00:12:55.494 END TEST nvme 00:12:55.494 ************************************ 00:12:55.494 00:12:55.494 real 1m30.501s 00:12:55.494 user 3m43.522s 00:12:55.494 sys 0m14.600s 00:12:55.494 09:18:41 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:55.494 09:18:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.494 09:18:41 -- common/autotest_common.sh@1142 -- # return 0 00:12:55.494 09:18:41 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:12:55.494 09:18:41 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:55.494 09:18:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:55.494 09:18:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:55.494 09:18:41 -- common/autotest_common.sh@10 -- # set +x 00:12:55.494 ************************************ 00:12:55.494 START TEST nvme_scc 00:12:55.494 ************************************ 00:12:55.494 09:18:41 nvme_scc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:55.494 * Looking for test storage... 
00:12:55.752 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:55.752 09:18:41 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:55.752 09:18:41 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:55.752 09:18:41 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:55.752 09:18:41 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:55.752 09:18:41 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:55.752 09:18:41 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.752 09:18:41 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.753 09:18:41 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.753 09:18:41 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.753 09:18:41 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.753 09:18:41 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.753 09:18:41 nvme_scc -- paths/export.sh@5 -- # export PATH 00:12:55.753 09:18:41 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.753 09:18:41 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:12:55.753 09:18:41 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:55.753 09:18:41 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:12:55.753 09:18:41 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:55.753 09:18:41 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:12:55.753 09:18:41 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:55.753 09:18:41 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:55.753 09:18:41 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:55.753 09:18:41 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:12:55.753 09:18:41 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:55.753 09:18:41 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:12:55.753 09:18:41 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:12:55.753 09:18:41 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:12:55.753 09:18:41 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:56.011 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:56.011 Waiting for block devices as requested 00:12:56.273 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:56.273 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:56.273 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:56.534 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:01.818 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:01.818 09:18:47 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:01.818 09:18:47 nvme_scc -- scripts/common.sh@15 -- # local i 00:13:01.818 09:18:47 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:13:01.818 09:18:47 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:01.818 09:18:47 nvme_scc -- scripts/common.sh@24 -- # return 0 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:01.818 
09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:13:01.818 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:01.819 09:18:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:01.819 09:18:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.819 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.820 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:01.821 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:01.822 09:18:47 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:13:01.822 09:18:47 nvme_scc -- scripts/common.sh@15 -- # local i 00:13:01.822 09:18:47 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:13:01.822 09:18:47 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:01.822 09:18:47 nvme_scc -- scripts/common.sh@24 -- # return 0 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.822 09:18:47 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.822 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[ver]="0x10400"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:01.823 09:18:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 
09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.823 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:01.824 09:18:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:01.824 09:18:47 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.824 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:01.825 09:18:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # 
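The repeated trace above is the nvme_get loop in nvme/functions.sh: every "field : value" line printed by nvme id-ctrl is split on ':' via IFS and stored into a global associative array (nvme1[vid]=0x1b36, nvme1[sqes]=0x66, and so on). A minimal sketch of that pattern, assuming bash 4.3+ and an nvme binary on PATH; the helper name demo_nvme_get is illustrative only, not part of the script:

demo_nvme_get() {                      # sketch of the pattern; the real helper is nvme_get in nvme/functions.sh
    local ref=$1 source=$2 dev=$3 reg val
    declare -gA "$ref"                 # e.g. nvme1, as in "local -gA 'nvme1=()'" in the trace
    local -n _ref=$ref
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}       # "ps    0 " -> "ps0", matching nvme1[ps0] above
        val=${val# }                   # keep the value otherwise verbatim (trailing padding and all)
        [[ -n $reg && -n $val ]] && _ref[$reg]=$val   # mirrors: eval 'nvme1[vid]="0x1b36"'
    done < <(nvme "$source" "$dev")    # id-ctrl or id-ns; the CI invokes /usr/local/src/nvme-cli/nvme
}
# usage (device path assumed): demo_nvme_get nvme1 id-ctrl /dev/nvme1; echo "${nvme1[sn]} ${nvme1[mn]}"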
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:13:01.825 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.826 
09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:01.826 09:18:47 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:01.827 09:18:47 nvme_scc -- scripts/common.sh@15 -- # local i 00:13:01.827 09:18:47 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:13:01.827 09:18:47 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:01.827 09:18:47 nvme_scc -- scripts/common.sh@24 -- # return 0 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:01.827 09:18:47 
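With the namespace array filled in, the in-use LBA format can be read back out of it: flbas=0x7 selects lbaf7, whose "lbads:12" means a 2^12 = 4096-byte data block plus 64 bytes of metadata. A small sketch, assuming the nvme1n1 array populated above is in scope; the low-nibble masking of flbas follows the usual NVMe interpretation, and the variable names here are illustrative:

# derive the in-use block size from the fields captured above
flbas=${nvme1n1[flbas]}                 # 0x7 in this trace
lbaf_index=$(( flbas & 0xf ))           # low nibble selects the LBA format entry
lbaf=${nvme1n1[lbaf$lbaf_index]}        # e.g. 'ms:64 lbads:12 rp:0 (in use)'
lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<<"$lbaf")
block_size=$(( 1 << lbads ))            # 2^12 = 4096 data bytes per LBA
echo "nvme1n1 uses lbaf$lbaf_index: ${block_size}-byte blocks"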
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.827 09:18:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.827 09:18:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:01.827 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 
00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 
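For reference, the raw nvme-cli text this loop is consuming looks roughly like the lines below; the field names and values are taken from the trace, but the exact column padding is an assumption.

  wctemp    : 343
  cctemp    : 373
  mtfa      : 0
  hmpre     : 0
  ...
  ps    0 : mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0
            rwt:0 rwl:0 idle_power:- active_power:-

The power-state entry also explains the odd-looking keys later in the trace: its continuation line starts with "rwt:", so the splitter files it under nvme2[rwt] with the rest of the line as the value.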
00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.828 09:18:48 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.828 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:13:01.829 09:18:48 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 
09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2[ofcs]="0"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:13:01.829 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.830 09:18:48 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:13:01.830 09:18:48 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:13:01.830 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme2n2[nsze]="0x100000"' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.831 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 
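By this point the trace has moved on from the controller registers to the per-namespace scan: functions.sh@53-58 bind a nameref (_ctrl_ns -> nvme2_ns), glob the controller's namespaces out of sysfs, and run the same nvme_get parser with id-ns for each of nvme2n1, nvme2n2 and nvme2n3. A hypothetical wrapper around that loop follows; the variable names come from the trace, everything else (including the function name) is assumed.

  # Sketch of the namespace scan traced at functions.sh@53-58; the real code
  # lives inside the controller-enumeration helper and may differ.
  scan_ctrl_namespaces() {                      # hypothetical name
      local ctrl=$1                             # e.g. /sys/class/nvme/nvme2
      local -n _ctrl_ns=${ctrl##*/}_ns          # functions.sh@53: nvme2_ns maps nsid -> array name
      local ns ns_dev
      for ns in "$ctrl/${ctrl##*/}n"*; do       # functions.sh@54: nvme2n1, nvme2n2, nvme2n3
          [[ -e $ns ]] || continue              # functions.sh@55: guard against an unmatched glob
          ns_dev=${ns##*/}                      # functions.sh@56: ns_dev=nvme2n1
          nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # functions.sh@57: fills nvme2n1[nsze], ...
          _ctrl_ns[${ns##*n}]=$ns_dev           # functions.sh@58: nvme2_ns[1]=nvme2n1
      done
  }

Each namespace ends up with its own associative array (nvme2n1, nvme2n2, ...) holding nsze, flbas, the lbaf descriptors and so on, which later checks in the nvme_scc test can index directly.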
00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:13:01.832 09:18:48 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:01.832 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@20 
-- # local -gA 'nvme2n3=()' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[nabo]="0"' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.833 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:01.834 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:13:01.834 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:13:01.834 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:01.834 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:13:01.834 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:13:01.834 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:01.834 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.834 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.834 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:13:01.834 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:13:01.834 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
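[editor's note] The trace above is `nvme_get` filling a global associative array: each line of `nvme id-ns /dev/nvme2n3` is split on the first colon into a field name and a value, whitespace is stripped from the name, and the pair is cached as nvme2n3[nsze], nvme2n3[lbaf0], and so on. A minimal sketch of that pattern, with an illustrative array name and device path rather than the exact functions.sh code:

```bash
#!/usr/bin/env bash
# Sketch of the id-ns parsing pattern visible in the trace above:
# split each "field : value" line on the first colon and cache it
# in an associative array. Array name and device path are examples.
declare -A ns_info

while IFS=: read -r reg val; do
  reg=${reg//[[:space:]]/}        # "lbaf  0 " -> "lbaf0", "nsze " -> "nsze"
  [[ -n $reg && -n $val ]] || continue
  ns_info[$reg]=${val# }          # keep the value, minus the space after ':'
done < <(nvme id-ns /dev/nvme2n3)

echo "nsze=${ns_info[nsze]} flbas=${ns_info[flbas]} lbaf0=${ns_info[lbaf0]}"
```

Caching every register this way is what lets the later helpers (get_nvme_ctrl_feature and friends in the trace) read a single field back without re-invoking nvme-cli.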
00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:13:02.095 09:18:48 nvme_scc -- scripts/common.sh@15 -- # local i 00:13:02.095 09:18:48 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:13:02.095 09:18:48 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:02.095 09:18:48 nvme_scc -- scripts/common.sh@24 -- # return 0 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:02.095 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:02.096 09:18:48 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:02.096 09:18:48 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.096 09:18:48 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:02.096 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:02.097 09:18:48 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.097 09:18:48 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:02.097 09:18:48 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:02.097 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:02.098 
09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:02.098 09:18:48 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:13:02.098 09:18:48 nvme_scc -- 
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:13:02.098 09:18:48 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:02.099 09:18:48 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:13:02.099 09:18:48 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:13:02.099 09:18:48 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:13:02.099 09:18:48 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:13:02.099 09:18:48 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:13:02.099 09:18:48 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:13:02.099 09:18:48 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:02.099 09:18:48 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:02.099 09:18:48 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:02.099 09:18:48 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:02.099 09:18:48 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:13:02.099 09:18:48 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:13:02.099 09:18:48 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:13:02.099 09:18:48 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:13:02.099 09:18:48 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:13:02.099 09:18:48 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:13:02.099 09:18:48 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:13:02.099 09:18:48 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:13:02.099 09:18:48 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:02.664 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:03.252 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:03.252 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:03.252 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:03.252 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:03.252 09:18:49 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:03.252 09:18:49 nvme_scc -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:03.252 09:18:49 nvme_scc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:03.252 09:18:49 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:03.252 ************************************ 00:13:03.252 START TEST nvme_simple_copy 00:13:03.252 ************************************ 00:13:03.252 09:18:49 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:03.510 Initializing NVMe Controllers 00:13:03.510 Attaching to 0000:00:10.0 00:13:03.510 Controller supports SCC. Attached to 0000:00:10.0 00:13:03.510 Namespace ID: 1 size: 6GB 00:13:03.510 Initialization complete. 00:13:03.510 00:13:03.510 Controller QEMU NVMe Ctrl (12340 ) 00:13:03.510 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:13:03.510 Namespace Block Size:4096 00:13:03.510 Writing LBAs 0 to 63 with Random Data 00:13:03.510 Copied LBAs from 0 - 63 to the Destination LBA 256 00:13:03.510 LBAs matching Written Data: 64 00:13:03.510 00:13:03.510 real 0m0.314s 00:13:03.510 user 0m0.128s 00:13:03.510 sys 0m0.085s 00:13:03.510 09:18:49 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:03.510 09:18:49 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:13:03.510 ************************************ 00:13:03.510 END TEST nvme_simple_copy 00:13:03.510 ************************************ 00:13:03.510 09:18:49 nvme_scc -- common/autotest_common.sh@1142 -- # return 0 00:13:03.510 00:13:03.510 real 0m8.037s 00:13:03.510 user 0m1.323s 00:13:03.510 sys 0m1.574s 00:13:03.510 09:18:49 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:03.510 ************************************ 00:13:03.510 09:18:49 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:03.510 END TEST nvme_scc 00:13:03.510 ************************************ 00:13:03.510 09:18:49 -- common/autotest_common.sh@1142 -- # return 0 00:13:03.510 09:18:49 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:13:03.510 09:18:49 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:13:03.510 09:18:49 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:13:03.510 09:18:49 -- spdk/autotest.sh@232 -- # [[ 1 -eq 1 ]] 00:13:03.510 09:18:49 -- spdk/autotest.sh@233 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:13:03.510 09:18:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:03.510 09:18:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:03.510 09:18:49 -- common/autotest_common.sh@10 -- # set +x 00:13:03.510 ************************************ 00:13:03.510 START TEST nvme_fdp 00:13:03.510 ************************************ 00:13:03.767 09:18:49 nvme_fdp -- common/autotest_common.sh@1123 -- # test/nvme/nvme_fdp.sh 00:13:03.767 * Looking for test storage... 
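[editor's note] Before launching the simple-copy binary, the harness picked /dev/nvme1 via get_ctrl_with_feature scc: for each discovered controller it reads the cached oncs value (0x15d here) and tests bit 8, the Copy command support bit, exactly as the `(( oncs & 1 << 8 ))` lines above show. A standalone version of the same check against a live device, assuming nvme-cli with JSON output and jq are available (the harness reads its cached arrays instead):

```bash
#!/usr/bin/env bash
# Does this controller advertise the Copy (simple copy) command?
# ONCS bit 8 is the Copy command support bit; 0x15d has it set.
dev=${1:-/dev/nvme0}

oncs=$(nvme id-ctrl "$dev" -o json | jq -r '.oncs')

if (( oncs & (1 << 8) )); then
  echo "$dev supports Copy (oncs=$(printf '0x%x' "$oncs"))"
else
  echo "$dev does not support Copy" >&2
  exit 1
fi
```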
00:13:03.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:03.767 09:18:49 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:03.767 09:18:49 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:03.767 09:18:49 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:03.767 09:18:49 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:03.767 09:18:49 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:03.767 09:18:49 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:03.767 09:18:49 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.767 09:18:49 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.767 09:18:49 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.767 09:18:49 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.767 09:18:49 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.767 09:18:49 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:13:03.767 09:18:49 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.767 09:18:49 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:13:03.767 09:18:49 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:03.767 09:18:49 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:13:03.767 09:18:49 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:03.767 09:18:49 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:13:03.767 09:18:49 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:03.767 09:18:49 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:03.767 09:18:49 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:03.767 09:18:49 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:13:03.767 09:18:49 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:03.767 09:18:49 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:04.026 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:04.283 Waiting for block devices as requested 00:13:04.283 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:04.283 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:04.540 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:04.540 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:09.806 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:09.806 09:18:55 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:09.806 09:18:55 nvme_fdp -- scripts/common.sh@15 -- # local i 00:13:09.806 09:18:55 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:13:09.806 09:18:55 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:09.806 09:18:55 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.806 
09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:09.806 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:09.807 09:18:55 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:09.807 09:18:55 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.807 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:09.808 09:18:55 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.808 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:09.809 
09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:09.809 
09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.809 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.810 09:18:55 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:09.810 09:18:55 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
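Every id-ctrl/id-ns dump in this controller scan is scraped the same way: nvme_get splits each "name : value" line on ':' and stores it in a per-device associative array (nvme0, nvme0n1, ...). A rough sketch of that pattern, assuming nvme-cli is available and /dev/nvme0 is illustrative; the real loop lives in test/common/nvme/functions.sh:

    # Collect id-ctrl fields into an associative array, one entry per line.
    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}      # field names are padded with spaces
        [[ -n $reg && -n $val ]] || continue
        ctrl[$reg]=${val# }           # drop the leading space after ':'
    done < <(nvme id-ctrl /dev/nvme0)
    echo "oncs=${ctrl[oncs]} nn=${ctrl[nn]}"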
00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:13:09.810 09:18:55 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:13:09.810 09:18:55 nvme_fdp -- scripts/common.sh@15 -- # local i 00:13:09.810 09:18:55 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:13:09.811 09:18:55 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:09.811 09:18:55 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:09.811 09:18:55 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.811 
09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
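Editor's note: the functions.sh@16-23 trace lines above show the generic parsing loop at work: the output of `nvme id-ctrl` is read line by line with IFS=:, and every non-empty value is stored via eval into a global associative array named after the device (nvme1[vid]=0x1b36, nvme1[sn]='12340 ', and so on). The bash below is a simplified, self-contained sketch of that pattern, assuming nvme-cli is installed; nvme_get_sketch is a hypothetical name and this is an approximation of the loop the trace shows, not the verbatim functions.sh source.

# Simplified sketch of the id-ctrl parsing loop seen in the trace above:
# split each "field : value" line on ':' and keep the pair in a global
# associative array named after the controller.
nvme_get_sketch() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                   # e.g. declare -gA nvme1=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue         # skip lines with no value part
        reg=${reg//[[:space:]]/}          # field name without padding
        eval "${ref}[\$reg]=\${val# }"    # drop the space after ':'
    done < <("$@")
}

# Usage mirroring the call in the trace (assumes /dev/nvme1 exists):
# nvme_get_sketch nvme1 nvme id-ctrl /dev/nvme1
# echo "sn=${nvme1[sn]} mn=${nvme1[mn]} mdts=${nvme1[mdts]}"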
00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.811 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:09.812 09:18:55 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
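Editor's note: the functions.sh@49-50 steps traced earlier (pci=0000:00:10.0, pci_can_use) map each /sys/class/nvme entry to the PCI address that later lands in bdfs[]. One common way to resolve that address from sysfs is sketched below; this is an assumption for illustration, not the exact functions.sh code.

# One common way (assumed, not the verbatim functions.sh logic) to map each
# NVMe controller in sysfs to its PCI address, as recorded in bdfs[]:
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    bdf=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:10.0
    echo "${ctrl##*/} -> $bdf"
done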
00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:09.812 09:18:56 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:09.812 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:09.813 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
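Editor's note: the functions.sh@53-57 lines above show the per-namespace pass: a nameref _ctrl_ns points at nvme1_ns, each /sys/class/nvme/nvme1/nvme1n* entry is checked, and `nvme id-ns` output fills an array named after the namespace. The bash below is a self-contained sketch of that loop under the assumption that nvme-cli and the /dev nodes exist; the inline parse mirrors the pattern sketched earlier rather than the exact functions.sh source.

# Self-contained sketch of the per-namespace pass traced above: enumerate
# /sys/class/nvme/nvme1/nvme1n*, parse `nvme id-ns` for each node into an
# array named after the namespace, and index it by namespace number.
declare -gA nvme1_ns=()
ctrl=/sys/class/nvme/nvme1
for ns in "$ctrl/${ctrl##*/}n"*; do
    [[ -e $ns ]] || continue
    ns_dev=${ns##*/}                                  # e.g. nvme1n1
    declare -gA "$ns_dev=()"
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue
        eval "${ns_dev}[\${reg//[[:space:]]/}]=\${val# }"
    done < <(nvme id-ns "/dev/$ns_dev")
    nvme1_ns[${ns_dev##*n}]=$ns_dev                   # 1 -> nvme1n1
done
echo "nvme1 namespaces: ${nvme1_ns[*]}"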
00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.814 
09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.814 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:09.815 09:18:56 nvme_fdp -- scripts/common.sh@15 -- # local i 00:13:09.815 09:18:56 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:13:09.815 09:18:56 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:09.815 09:18:56 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.815 09:18:56 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.815 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:09.816 09:18:56 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.816 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:09.817 09:18:56 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:09.817 09:18:56 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:09.817 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.818 
09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:09.818 09:18:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nuse]="0x100000"' 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:13:10.080 09:18:56 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:13:10.080 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:13:10.081 09:18:56 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:13:10.081 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
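[editor's note, not part of the captured log] The trace above shows the test's register-dump helper in action: for every "field : value" line that nvme-cli prints for id-ctrl/id-ns, it does `IFS=: read -r reg val` and then evals an assignment into a per-device associative array (nvme2[...], nvme2n1[...], nvme2n2[...]). The short sketch below illustrates that pattern only; the function and array names are hypothetical, not the real functions.sh API, and running it against real hardware assumes nvme-cli is installed.

#!/usr/bin/env bash
# Illustrative sketch only (not the verbatim functions.sh helper): split each
# "field : value" line of `nvme id-ctrl` / `nvme id-ns` output on ':' and
# store it in a bash associative array, as the trace above does.

declare -A nvme_ctrl          # e.g. nvme_ctrl[oacs]=0x12a

parse_id_output() {           # parse_id_output <array-name>   (reads stdin)
    local -n _regs=$1
    local reg val
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}   # field name with padding removed
        val=${val# }               # value; any further ':' stays intact
        [[ -n $reg && -n $val ]] && _regs[$reg]=$val
    done
}

# On real hardware this would be:  parse_id_output nvme_ctrl < <(nvme id-ctrl /dev/nvme2)
# Self-contained demo with two sample lines instead:
parse_id_output nvme_ctrl <<'EOF'
oacs      : 0x12a
subnqn    : nqn.2019-08.org.qemu:12342
EOF

echo "oacs=${nvme_ctrl[oacs]}  subnqn=${nvme_ctrl[subnqn]}"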
00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
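[editor's note, not part of the captured log] The entries at functions.sh@54-58 in this trace also show the per-namespace walk: the controller's sysfs directory is globbed for nvme2n*, each match is parsed with the same id-ns helper, and the result is indexed by namespace number in a _ctrl_ns map. A rough, hypothetical rendering of that loop follows; the paths and variable names are assumptions for illustration, and the id-ns call itself is elided.

#!/usr/bin/env bash
# Hypothetical rendering of the namespace walk seen at functions.sh@54-58:
# enumerate /sys/class/nvme/nvme2/nvme2n*, derive each namespace device name,
# and index it by its trailing number (1, 2, 3, ...).
declare -A _ctrl_ns

ctrl=/sys/class/nvme/nvme2          # assumed controller sysfs path
for ns in "$ctrl/${ctrl##*/}n"*; do # expands to .../nvme2n1, .../nvme2n2, ...
    [[ -e $ns ]] || continue        # skip if the glob matched nothing
    ns_dev=${ns##*/}                # e.g. nvme2n1
    # the real test would now parse: nvme id-ns "/dev/$ns_dev"
    _ctrl_ns[${ns_dev##*n}]=$ns_dev # key is the namespace index
done

declare -p _ctrl_ns                 # show what was collected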
00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:10.082 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.083 09:18:56 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:13:10.083 09:18:56 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.083 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:13:10.084 09:18:56 nvme_fdp -- scripts/common.sh@15 -- # local i 00:13:10.084 09:18:56 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:13:10.084 09:18:56 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:10.084 09:18:56 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.084 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:10.085 09:18:56 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
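[Editor's note] The trace above and below repeats one pattern: nvme_get runs nvme-cli's id-ctrl/id-ns against a device, splits every output line on ':' into a register name and a value, and evals the pair into a global associative array named after the device (nvme3, nvme2n2, ...), which later lookups such as ${nvme3[ctratt]} rely on. A minimal sketch of that pattern follows, assuming nvme-cli's plain-text output format; the name nvme_get_sketch is hypothetical and this is not the actual nvme/functions.sh implementation.

# Sketch only: simplified re-implementation of the nvme_get pattern traced here.
# "nvme" stands for the nvme-cli binary (the log uses a local build path).
nvme_get_sketch() {
    local ref=$1 subcmd=$2 dev=$3 reg val
    declare -gA "$ref"                        # e.g. a global assoc array nvme3
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue  # skip blank or header lines
        reg=${reg//[[:space:]]/}              # "ctratt   " -> "ctratt"
        val=${val# }                          # drop the single space after ':'
        eval "${ref}[${reg}]=\"\$val\""       # e.g. nvme3[ctratt]="0x88010"
    done < <(nvme "$subcmd" "$dev")
}
# Usage: nvme_get_sketch nvme3 id-ctrl /dev/nvme3; echo "${nvme3[ctratt]}"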
00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.085 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:10.086 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.087 
09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:10.087 09:18:56 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:13:10.087 09:18:56 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:13:10.087 09:18:56 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:13:10.088 09:18:56 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:13:10.088 09:18:56 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:13:10.088 09:18:56 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:13:10.088 09:18:56 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:10.088 09:18:56 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:10.088 09:18:56 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:10.088 09:18:56 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:10.088 09:18:56 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:13:10.088 09:18:56 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:10.088 09:18:56 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:13:10.088 09:18:56 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:13:10.088 09:18:56 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:13:10.088 09:18:56 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:13:10.088 09:18:56 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:13:10.088 09:18:56 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:10.654 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:11.293 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:11.293 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:11.293 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:11.293 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:11.293 09:18:57 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:13:11.293 09:18:57 nvme_fdp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:11.293 09:18:57 nvme_fdp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:11.293 09:18:57 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:13:11.551 ************************************ 00:13:11.551 START TEST nvme_flexible_data_placement 00:13:11.551 ************************************ 00:13:11.551 09:18:57 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:13:11.809 Initializing NVMe Controllers 00:13:11.809 Attaching to 0000:00:13.0 00:13:11.809 Controller supports FDP Attached to 0000:00:13.0 00:13:11.809 Namespace ID: 1 Endurance Group ID: 1 
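The controller selection traced above comes down to a single test: ctrl_has_fdp treats a controller as FDP-capable when bit 19 of its CTRATT value is set, which is why nvme3 (ctratt=0x88010) is picked while nvme0, nvme1 and nvme2 (ctratt=0x8000) are skipped. A minimal standalone sketch of that check, reusing the values from the trace as example inputs:

    # Sketch of the ctrl_has_fdp test from nvme/functions.sh: FDP support is
    # advertised by CTRATT bit 19 (mask 0x80000).
    ctrl_has_fdp() {
        local ctratt=$1
        (( ctratt & 1 << 19 ))   # exit status 0 only when bit 19 is set
    }
    ctrl_has_fdp 0x88010 && echo "FDP supported"       # nvme3 above
    ctrl_has_fdp 0x8000 || echo "FDP not supported"    # nvme0/nvme1/nvme2 above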
00:13:11.809 Initialization complete. 00:13:11.809 00:13:11.809 ================================== 00:13:11.809 == FDP tests for Namespace: #01 == 00:13:11.809 ================================== 00:13:11.809 00:13:11.809 Get Feature: FDP: 00:13:11.809 ================= 00:13:11.809 Enabled: Yes 00:13:11.809 FDP configuration Index: 0 00:13:11.809 00:13:11.809 FDP configurations log page 00:13:11.809 =========================== 00:13:11.809 Number of FDP configurations: 1 00:13:11.809 Version: 0 00:13:11.809 Size: 112 00:13:11.809 FDP Configuration Descriptor: 0 00:13:11.809 Descriptor Size: 96 00:13:11.809 Reclaim Group Identifier format: 2 00:13:11.809 FDP Volatile Write Cache: Not Present 00:13:11.809 FDP Configuration: Valid 00:13:11.809 Vendor Specific Size: 0 00:13:11.809 Number of Reclaim Groups: 2 00:13:11.809 Number of Reclaim Unit Handles: 8 00:13:11.809 Max Placement Identifiers: 128 00:13:11.809 Number of Namespaces Supported: 256 00:13:11.809 Reclaim unit Nominal Size: 6000000 bytes 00:13:11.809 Estimated Reclaim Unit Time Limit: Not Reported 00:13:11.809 RUH Desc #000: RUH Type: Initially Isolated 00:13:11.809 RUH Desc #001: RUH Type: Initially Isolated 00:13:11.809 RUH Desc #002: RUH Type: Initially Isolated 00:13:11.809 RUH Desc #003: RUH Type: Initially Isolated 00:13:11.809 RUH Desc #004: RUH Type: Initially Isolated 00:13:11.809 RUH Desc #005: RUH Type: Initially Isolated 00:13:11.809 RUH Desc #006: RUH Type: Initially Isolated 00:13:11.809 RUH Desc #007: RUH Type: Initially Isolated 00:13:11.809 00:13:11.809 FDP reclaim unit handle usage log page 00:13:11.809 ====================================== 00:13:11.809 Number of Reclaim Unit Handles: 8 00:13:11.809 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:11.809 RUH Usage Desc #001: RUH Attributes: Unused 00:13:11.809 RUH Usage Desc #002: RUH Attributes: Unused 00:13:11.809 RUH Usage Desc #003: RUH Attributes: Unused 00:13:11.809 RUH Usage Desc #004: RUH Attributes: Unused 00:13:11.809 RUH Usage Desc #005: RUH Attributes: Unused 00:13:11.809 RUH Usage Desc #006: RUH Attributes: Unused 00:13:11.809 RUH Usage Desc #007: RUH Attributes: Unused 00:13:11.809 00:13:11.809 FDP statistics log page 00:13:11.809 ======================= 00:13:11.809 Host bytes with metadata written: 817258496 00:13:11.809 Media bytes with metadata written: 817430528 00:13:11.809 Media bytes erased: 0 00:13:11.809 00:13:11.809 FDP Reclaim unit handle status 00:13:11.809 ============================== 00:13:11.809 Number of RUHS descriptors: 2 00:13:11.809 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000549a 00:13:11.809 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:13:11.809 00:13:11.809 FDP write on placement id: 0 success 00:13:11.809 00:13:11.809 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:13:11.809 00:13:11.809 IO mgmt send: RUH update for Placement ID: #0 Success 00:13:11.809 00:13:11.810 Get Feature: FDP Events for Placement handle: #0 00:13:11.810 ======================== 00:13:11.810 Number of FDP Events: 6 00:13:11.810 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:13:11.810 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:13:11.810 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:13:11.810 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:13:11.810 FDP Event: #4 Type: Media Reallocated Enabled: No 00:13:11.810 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 
00:13:11.810 00:13:11.810 FDP events log page 00:13:11.810 =================== 00:13:11.810 Number of FDP events: 1 00:13:11.810 FDP Event #0: 00:13:11.810 Event Type: RU Not Written to Capacity 00:13:11.810 Placement Identifier: Valid 00:13:11.810 NSID: Valid 00:13:11.810 Location: Valid 00:13:11.810 Placement Identifier: 0 00:13:11.810 Event Timestamp: 7 00:13:11.810 Namespace Identifier: 1 00:13:11.810 Reclaim Group Identifier: 0 00:13:11.810 Reclaim Unit Handle Identifier: 0 00:13:11.810 00:13:11.810 FDP test passed 00:13:11.810 00:13:11.810 real 0m0.274s 00:13:11.810 user 0m0.090s 00:13:11.810 sys 0m0.082s 00:13:11.810 09:18:57 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:11.810 ************************************ 00:13:11.810 END TEST nvme_flexible_data_placement 00:13:11.810 ************************************ 00:13:11.810 09:18:57 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:13:11.810 09:18:57 nvme_fdp -- common/autotest_common.sh@1142 -- # return 0 00:13:11.810 ************************************ 00:13:11.810 END TEST nvme_fdp 00:13:11.810 ************************************ 00:13:11.810 00:13:11.810 real 0m8.098s 00:13:11.810 user 0m1.321s 00:13:11.810 sys 0m1.659s 00:13:11.810 09:18:57 nvme_fdp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:11.810 09:18:57 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:13:11.810 09:18:57 -- common/autotest_common.sh@1142 -- # return 0 00:13:11.810 09:18:57 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:13:11.810 09:18:57 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:11.810 09:18:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:11.810 09:18:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:11.810 09:18:57 -- common/autotest_common.sh@10 -- # set +x 00:13:11.810 ************************************ 00:13:11.810 START TEST nvme_rpc 00:13:11.810 ************************************ 00:13:11.810 09:18:58 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:11.810 * Looking for test storage... 
00:13:11.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:11.810 09:18:58 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:11.810 09:18:58 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:13:11.810 09:18:58 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:13:11.810 09:18:58 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:13:11.810 09:18:58 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:13:11.810 09:18:58 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:13:11.810 09:18:58 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:13:11.810 09:18:58 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:13:11.810 09:18:58 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:11.810 09:18:58 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:11.810 09:18:58 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:13:12.069 09:18:58 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:13:12.069 09:18:58 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:12.069 09:18:58 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:13:12.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.069 09:18:58 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:13:12.069 09:18:58 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=72630 00:13:12.069 09:18:58 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:13:12.069 09:18:58 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:12.069 09:18:58 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 72630 00:13:12.069 09:18:58 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 72630 ']' 00:13:12.069 09:18:58 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.069 09:18:58 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:12.069 09:18:58 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.069 09:18:58 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:12.069 09:18:58 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.069 [2024-07-12 09:18:58.280019] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:13:12.069 [2024-07-12 09:18:58.280207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72630 ] 00:13:12.327 [2024-07-12 09:18:58.455092] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:12.585 [2024-07-12 09:18:58.680671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.585 [2024-07-12 09:18:58.680675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.151 09:18:59 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:13.151 09:18:59 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:13:13.151 09:18:59 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:13:13.718 Nvme0n1 00:13:13.718 09:18:59 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:13:13.718 09:18:59 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:13:13.718 request: 00:13:13.718 { 00:13:13.718 "bdev_name": "Nvme0n1", 00:13:13.718 "filename": "non_existing_file", 00:13:13.718 "method": "bdev_nvme_apply_firmware", 00:13:13.718 "req_id": 1 00:13:13.718 } 00:13:13.718 Got JSON-RPC error response 00:13:13.718 response: 00:13:13.718 { 00:13:13.718 "code": -32603, 00:13:13.718 "message": "open file failed." 00:13:13.718 } 00:13:13.718 09:19:00 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:13:13.718 09:19:00 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:13:13.718 09:19:00 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:13:13.976 09:19:00 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:13.976 09:19:00 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 72630 00:13:13.976 09:19:00 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 72630 ']' 00:13:13.976 09:19:00 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 72630 00:13:13.976 09:19:00 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:13:13.976 09:19:00 nvme_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:13.976 09:19:00 nvme_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72630 00:13:13.976 09:19:00 nvme_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:13.976 09:19:00 nvme_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:13.976 09:19:00 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72630' 00:13:13.976 killing process with pid 72630 00:13:13.976 09:19:00 nvme_rpc -- common/autotest_common.sh@967 -- # kill 72630 00:13:13.976 09:19:00 nvme_rpc -- common/autotest_common.sh@972 -- # wait 72630 00:13:16.505 00:13:16.505 real 0m4.322s 00:13:16.505 user 0m8.256s 00:13:16.505 sys 0m0.574s 00:13:16.505 ************************************ 00:13:16.505 END TEST nvme_rpc 00:13:16.505 ************************************ 00:13:16.505 09:19:02 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:16.505 09:19:02 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.505 09:19:02 -- common/autotest_common.sh@1142 -- # return 0 00:13:16.505 09:19:02 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 
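The nvme_rpc run that just finished is, stripped of the harness, a short rpc.py conversation with a running spdk_tgt: attach the first controller as a bdev, ask it to apply firmware from a file that does not exist, confirm the -32603 "open file failed." error, then detach. A hand-run sketch of the same calls, assuming the repo layout used in this job and a target already listening on the default RPC socket:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Attach the first NVMe controller (bdf picked by gen_nvme.sh | jq above) as bdev Nvme0
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    # Firmware update against a missing file is expected to fail with
    # JSON-RPC error -32603 "open file failed.", as seen in the trace
    $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1 || echo "failed as expected"
    # Detach before stopping the target
    $rpc bdev_nvme_detach_controller Nvme0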
00:13:16.505 09:19:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:16.505 09:19:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:16.505 09:19:02 -- common/autotest_common.sh@10 -- # set +x 00:13:16.505 ************************************ 00:13:16.505 START TEST nvme_rpc_timeouts 00:13:16.505 ************************************ 00:13:16.505 09:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:16.505 * Looking for test storage... 00:13:16.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:16.505 09:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:16.505 09:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_72707 00:13:16.505 09:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_72707 00:13:16.505 09:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=72731 00:13:16.505 09:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:16.505 09:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:13:16.505 09:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 72731 00:13:16.505 09:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 72731 ']' 00:13:16.505 09:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.505 09:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:16.505 09:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.505 09:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:16.505 09:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:13:16.505 [2024-07-12 09:19:02.579575] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:13:16.505 [2024-07-12 09:19:02.580003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72731 ] 00:13:16.505 [2024-07-12 09:19:02.752001] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:16.764 [2024-07-12 09:19:02.940134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.764 [2024-07-12 09:19:02.940142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.695 Checking default timeout settings: 00:13:17.695 09:19:03 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:17.695 09:19:03 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:13:17.695 09:19:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:13:17.695 09:19:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:17.953 Making settings changes with rpc: 00:13:17.953 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:13:17.953 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:13:18.211 Check default vs. modified settings: 00:13:18.211 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:13:18.211 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_72707 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_72707 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:18.470 Setting action_on_timeout is changed as expected. 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_72707 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_72707 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:18.470 Setting timeout_us is changed as expected. 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_72707 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_72707 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:13:18.470 Setting timeout_admin_us is changed as expected. 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
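All three checks above follow the same pattern: dump the configuration with save_config before and after the change, pull one field out of each dump, and confirm it moved from the default to the requested value. A condensed bash sketch of that pattern (file names simplified here; the real test suffixes them with the target pid), assuming a running spdk_tgt:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default        # defaults: action none, timeouts 0
    $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ "$before" != "$after" ]] && echo "Setting $setting is changed as expected."
    done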
00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_72707 /tmp/settings_modified_72707 00:13:18.470 09:19:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 72731 00:13:18.470 09:19:04 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 72731 ']' 00:13:18.470 09:19:04 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 72731 00:13:18.470 09:19:04 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:13:18.470 09:19:04 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:18.470 09:19:04 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72731 00:13:18.470 killing process with pid 72731 00:13:18.470 09:19:04 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:18.470 09:19:04 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:18.470 09:19:04 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72731' 00:13:18.470 09:19:04 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 72731 00:13:18.470 09:19:04 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 72731 00:13:21.005 RPC TIMEOUT SETTING TEST PASSED. 00:13:21.005 09:19:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:13:21.005 ************************************ 00:13:21.005 END TEST nvme_rpc_timeouts 00:13:21.005 ************************************ 00:13:21.005 00:13:21.005 real 0m4.488s 00:13:21.005 user 0m8.695s 00:13:21.005 sys 0m0.572s 00:13:21.005 09:19:06 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:21.005 09:19:06 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:13:21.005 09:19:06 -- common/autotest_common.sh@1142 -- # return 0 00:13:21.005 09:19:06 -- spdk/autotest.sh@243 -- # uname -s 00:13:21.005 09:19:06 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:13:21.006 09:19:06 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:21.006 09:19:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:21.006 09:19:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:21.006 09:19:06 -- common/autotest_common.sh@10 -- # set +x 00:13:21.006 ************************************ 00:13:21.006 START TEST sw_hotplug 00:13:21.006 ************************************ 00:13:21.006 09:19:06 sw_hotplug -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:21.006 * Looking for test storage... 
00:13:21.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:21.006 09:19:07 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:21.006 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:21.264 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:21.264 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:21.264 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:21.264 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:21.264 09:19:07 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:13:21.264 09:19:07 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:13:21.264 09:19:07 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:13:21.264 09:19:07 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@230 -- # local class 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@15 -- # local i 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:13:21.264 09:19:07 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@15 -- # local i 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@15 -- # local i 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:13:21.264 09:19:07 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:21.264 09:19:07 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:13:21.264 09:19:07 sw_hotplug -- 
nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:13:21.264 09:19:07 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:21.830 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:21.830 Waiting for block devices as requested 00:13:21.830 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:22.087 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:22.087 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:22.087 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:27.353 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:27.353 09:19:13 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:13:27.353 09:19:13 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:27.611 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:13:27.611 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:27.611 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:13:28.177 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:13:28.177 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:28.177 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:28.435 09:19:14 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:13:28.435 09:19:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:28.435 09:19:14 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:13:28.435 09:19:14 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:13:28.435 09:19:14 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=73591 00:13:28.435 09:19:14 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:13:28.435 09:19:14 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:13:28.435 09:19:14 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:28.435 09:19:14 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:13:28.435 09:19:14 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:13:28.435 09:19:14 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:13:28.436 09:19:14 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:13:28.436 09:19:14 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:13:28.436 09:19:14 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 false 00:13:28.436 09:19:14 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:28.436 09:19:14 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:28.436 09:19:14 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:13:28.436 09:19:14 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:28.436 09:19:14 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:28.694 Initializing NVMe Controllers 00:13:28.694 Attaching to 0000:00:10.0 00:13:28.694 Attaching to 0000:00:11.0 00:13:28.694 Attached to 0000:00:11.0 00:13:28.694 Attached to 0000:00:10.0 00:13:28.694 Initialization complete. Starting I/O... 
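The I/O counters that follow come from build/examples/hotplug racing against remove_attach_helper, which surprise-removes and re-adds the two controllers left by PCI_ALLOWED three times (hotplug_events=3, hotplug_wait=6 in the trace). The xtrace above hides the redirect targets of the helper's echo steps, so the sketch below fills them in with the standard PCI sysfs nodes as an assumption; only the rescan write appears verbatim later in this log, in the cleanup trap. The candidate devices themselves come from nvme_in_userspace, whose lspci filter is reproduced on the first line:

    # Enumerate NVMe controllers the way nvme_in_userspace does (class 01, subclass 08, prog-if 02)
    lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    bdf=0000:00:10.0   # one of the two controllers kept by PCI_ALLOWED above
    # Assumed target of the 'echo 1' steps in the trace: surprise-remove the device
    echo 1 > /sys/bus/pci/devices/$bdf/remove
    sleep 6            # hotplug_wait from the trace
    # Re-discover it; this write appears verbatim in the script's cleanup trap
    echo 1 > /sys/bus/pci/rescan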
00:13:28.694 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:13:28.694 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:13:28.694 00:13:29.627 QEMU NVMe Ctrl (12341 ): 1038 I/Os completed (+1038) 00:13:29.628 QEMU NVMe Ctrl (12340 ): 1122 I/Os completed (+1122) 00:13:29.628 00:13:31.001 QEMU NVMe Ctrl (12341 ): 2414 I/Os completed (+1376) 00:13:31.001 QEMU NVMe Ctrl (12340 ): 2582 I/Os completed (+1460) 00:13:31.001 00:13:31.935 QEMU NVMe Ctrl (12341 ): 4094 I/Os completed (+1680) 00:13:31.935 QEMU NVMe Ctrl (12340 ): 4400 I/Os completed (+1818) 00:13:31.935 00:13:32.870 QEMU NVMe Ctrl (12341 ): 5888 I/Os completed (+1794) 00:13:32.870 QEMU NVMe Ctrl (12340 ): 6293 I/Os completed (+1893) 00:13:32.870 00:13:33.804 QEMU NVMe Ctrl (12341 ): 7544 I/Os completed (+1656) 00:13:33.804 QEMU NVMe Ctrl (12340 ): 8042 I/Os completed (+1749) 00:13:33.804 00:13:34.370 09:19:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:34.370 09:19:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:34.370 09:19:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:34.370 [2024-07-12 09:19:20.695926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:34.370 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:34.370 [2024-07-12 09:19:20.697973] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.370 [2024-07-12 09:19:20.698170] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.370 [2024-07-12 09:19:20.698262] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.370 [2024-07-12 09:19:20.698445] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.370 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:34.370 [2024-07-12 09:19:20.703149] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.370 [2024-07-12 09:19:20.703346] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.370 [2024-07-12 09:19:20.703418] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.370 [2024-07-12 09:19:20.703556] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.370 09:19:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:34.370 09:19:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:34.628 [2024-07-12 09:19:20.722467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:34.628 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:34.628 [2024-07-12 09:19:20.724156] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.628 [2024-07-12 09:19:20.724226] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.628 [2024-07-12 09:19:20.724261] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.628 [2024-07-12 09:19:20.724288] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.628 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:34.628 [2024-07-12 09:19:20.726853] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.628 [2024-07-12 09:19:20.727031] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.628 [2024-07-12 09:19:20.727215] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.628 [2024-07-12 09:19:20.727250] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.628 09:19:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:34.628 09:19:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:34.628 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:13:34.628 EAL: Scan for (pci) bus failed. 00:13:34.628 09:19:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:34.628 09:19:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:34.628 09:19:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:34.628 09:19:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:34.628 00:13:34.628 09:19:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:34.628 09:19:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:34.628 09:19:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:34.628 09:19:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:34.628 Attaching to 0000:00:10.0 00:13:34.628 Attached to 0000:00:10.0 00:13:34.886 09:19:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:34.886 09:19:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:34.886 09:19:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:34.886 Attaching to 0000:00:11.0 00:13:34.886 Attached to 0000:00:11.0 00:13:35.822 QEMU NVMe Ctrl (12340 ): 1752 I/Os completed (+1752) 00:13:35.822 QEMU NVMe Ctrl (12341 ): 1670 I/Os completed (+1670) 00:13:35.822 00:13:36.759 QEMU NVMe Ctrl (12340 ): 3479 I/Os completed (+1727) 00:13:36.759 QEMU NVMe Ctrl (12341 ): 3590 I/Os completed (+1920) 00:13:36.759 00:13:37.693 QEMU NVMe Ctrl (12340 ): 5247 I/Os completed (+1768) 00:13:37.693 QEMU NVMe Ctrl (12341 ): 5432 I/Os completed (+1842) 00:13:37.693 00:13:38.627 QEMU NVMe Ctrl (12340 ): 6918 I/Os completed (+1671) 00:13:38.627 QEMU NVMe Ctrl (12341 ): 7170 I/Os completed (+1738) 00:13:38.627 00:13:40.005 QEMU NVMe Ctrl (12340 ): 8566 I/Os completed (+1648) 00:13:40.005 QEMU NVMe Ctrl (12341 ): 8956 I/Os completed (+1786) 00:13:40.005 00:13:41.026 QEMU NVMe Ctrl (12340 ): 10201 I/Os completed (+1635) 00:13:41.026 QEMU NVMe Ctrl (12341 ): 10731 I/Os completed (+1775) 00:13:41.026 00:13:41.592 QEMU NVMe Ctrl (12340 ): 11793 I/Os completed (+1592) 00:13:41.593 QEMU NVMe Ctrl (12341 ): 12470 I/Os completed (+1739) 
00:13:41.593 00:13:42.968 QEMU NVMe Ctrl (12340 ): 13645 I/Os completed (+1852) 00:13:42.968 QEMU NVMe Ctrl (12341 ): 14362 I/Os completed (+1892) 00:13:42.968 00:13:43.902 QEMU NVMe Ctrl (12340 ): 15405 I/Os completed (+1760) 00:13:43.902 QEMU NVMe Ctrl (12341 ): 16229 I/Os completed (+1867) 00:13:43.902 00:13:44.838 QEMU NVMe Ctrl (12340 ): 17113 I/Os completed (+1708) 00:13:44.838 QEMU NVMe Ctrl (12341 ): 18013 I/Os completed (+1784) 00:13:44.838 00:13:45.771 QEMU NVMe Ctrl (12340 ): 18700 I/Os completed (+1587) 00:13:45.771 QEMU NVMe Ctrl (12341 ): 19860 I/Os completed (+1847) 00:13:45.771 00:13:46.702 QEMU NVMe Ctrl (12340 ): 20445 I/Os completed (+1745) 00:13:46.702 QEMU NVMe Ctrl (12341 ): 21795 I/Os completed (+1935) 00:13:46.702 00:13:46.702 09:19:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:46.702 09:19:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:46.702 09:19:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:46.702 09:19:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:46.702 [2024-07-12 09:19:33.030351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:46.702 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:46.702 [2024-07-12 09:19:33.032747] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.702 [2024-07-12 09:19:33.032842] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.702 [2024-07-12 09:19:33.032900] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.702 [2024-07-12 09:19:33.032941] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.702 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:46.702 [2024-07-12 09:19:33.036530] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.702 [2024-07-12 09:19:33.036627] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.702 [2024-07-12 09:19:33.036659] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.702 [2024-07-12 09:19:33.036687] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.702 09:19:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:46.702 09:19:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:46.959 [2024-07-12 09:19:33.057739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:46.959 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:46.959 [2024-07-12 09:19:33.059804] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.959 [2024-07-12 09:19:33.059895] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.959 [2024-07-12 09:19:33.059935] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.959 [2024-07-12 09:19:33.059962] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.959 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:46.959 [2024-07-12 09:19:33.062963] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.959 [2024-07-12 09:19:33.063051] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.959 [2024-07-12 09:19:33.063085] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.959 [2024-07-12 09:19:33.063112] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:46.959 09:19:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:46.959 09:19:33 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:46.959 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:13:46.959 EAL: Scan for (pci) bus failed. 00:13:46.959 09:19:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:46.959 09:19:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:46.959 09:19:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:46.959 09:19:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:46.959 09:19:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:46.959 09:19:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:46.959 09:19:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:46.959 09:19:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:46.959 Attaching to 0000:00:10.0 00:13:46.959 Attached to 0000:00:10.0 00:13:47.216 09:19:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:47.216 09:19:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:47.216 09:19:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:47.216 Attaching to 0000:00:11.0 00:13:47.216 Attached to 0000:00:11.0 00:13:47.782 QEMU NVMe Ctrl (12340 ): 1030 I/Os completed (+1030) 00:13:47.782 QEMU NVMe Ctrl (12341 ): 970 I/Os completed (+970) 00:13:47.782 00:13:48.717 QEMU NVMe Ctrl (12340 ): 2740 I/Os completed (+1710) 00:13:48.717 QEMU NVMe Ctrl (12341 ): 2740 I/Os completed (+1770) 00:13:48.717 00:13:49.651 QEMU NVMe Ctrl (12340 ): 4569 I/Os completed (+1829) 00:13:49.651 QEMU NVMe Ctrl (12341 ): 4643 I/Os completed (+1903) 00:13:49.651 00:13:50.584 QEMU NVMe Ctrl (12340 ): 6379 I/Os completed (+1810) 00:13:50.584 QEMU NVMe Ctrl (12341 ): 6545 I/Os completed (+1902) 00:13:50.584 00:13:51.976 QEMU NVMe Ctrl (12340 ): 8087 I/Os completed (+1708) 00:13:51.976 QEMU NVMe Ctrl (12341 ): 8340 I/Os completed (+1795) 00:13:51.976 00:13:52.910 QEMU NVMe Ctrl (12340 ): 9826 I/Os completed (+1739) 00:13:52.910 QEMU NVMe Ctrl (12341 ): 10259 I/Os completed (+1919) 00:13:52.910 00:13:53.847 QEMU NVMe Ctrl (12340 ): 11723 I/Os completed (+1897) 00:13:53.847 QEMU NVMe Ctrl (12341 ): 12217 I/Os completed (+1958) 00:13:53.847 00:13:54.781 
QEMU NVMe Ctrl (12340 ): 13400 I/Os completed (+1677) 00:13:54.781 QEMU NVMe Ctrl (12341 ): 14078 I/Os completed (+1861) 00:13:54.781 00:13:55.717 QEMU NVMe Ctrl (12340 ): 15228 I/Os completed (+1828) 00:13:55.717 QEMU NVMe Ctrl (12341 ): 15935 I/Os completed (+1857) 00:13:55.717 00:13:56.652 QEMU NVMe Ctrl (12340 ): 17014 I/Os completed (+1786) 00:13:56.652 QEMU NVMe Ctrl (12341 ): 17819 I/Os completed (+1884) 00:13:56.652 00:13:57.607 QEMU NVMe Ctrl (12340 ): 18874 I/Os completed (+1860) 00:13:57.607 QEMU NVMe Ctrl (12341 ): 19740 I/Os completed (+1921) 00:13:57.607 00:13:58.998 QEMU NVMe Ctrl (12340 ): 20622 I/Os completed (+1748) 00:13:58.998 QEMU NVMe Ctrl (12341 ): 21621 I/Os completed (+1881) 00:13:58.998 00:13:59.256 09:19:45 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:59.256 09:19:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:59.256 09:19:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:59.256 09:19:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:59.256 [2024-07-12 09:19:45.386449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:59.256 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:59.256 [2024-07-12 09:19:45.388276] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.256 [2024-07-12 09:19:45.388339] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.257 [2024-07-12 09:19:45.388368] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.257 [2024-07-12 09:19:45.388396] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.257 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:59.257 [2024-07-12 09:19:45.391268] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.257 [2024-07-12 09:19:45.391326] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.257 [2024-07-12 09:19:45.391350] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.257 [2024-07-12 09:19:45.391382] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.257 09:19:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:59.257 09:19:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:59.257 [2024-07-12 09:19:45.413776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:59.257 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:59.257 [2024-07-12 09:19:45.415438] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.257 [2024-07-12 09:19:45.415504] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.257 [2024-07-12 09:19:45.415535] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.257 [2024-07-12 09:19:45.415558] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.257 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:59.257 [2024-07-12 09:19:45.417970] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.257 [2024-07-12 09:19:45.418020] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.257 [2024-07-12 09:19:45.418048] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.257 [2024-07-12 09:19:45.418068] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:59.257 09:19:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:59.257 09:19:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:59.257 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:13:59.257 EAL: Scan for (pci) bus failed. 00:13:59.257 09:19:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:59.257 09:19:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:59.257 09:19:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:59.514 09:19:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:59.514 09:19:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:59.514 09:19:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:59.514 09:19:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:59.514 09:19:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:59.514 Attaching to 0000:00:10.0 00:13:59.514 Attached to 0000:00:10.0 00:13:59.514 09:19:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:59.514 09:19:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:59.514 09:19:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:59.514 Attaching to 0000:00:11.0 00:13:59.514 Attached to 0000:00:11.0 00:13:59.514 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:59.514 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:59.514 [2024-07-12 09:19:45.759331] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:14:11.712 09:19:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:11.712 09:19:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:11.712 09:19:57 sw_hotplug -- common/autotest_common.sh@715 -- # time=43.06 00:14:11.712 09:19:57 sw_hotplug -- common/autotest_common.sh@716 -- # echo 43.06 00:14:11.712 09:19:57 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:14:11.712 09:19:57 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.06 00:14:11.712 09:19:57 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.06 2 00:14:11.712 remove_attach_helper took 43.06s to complete (handling 2 nvme drive(s)) 09:19:57 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:14:18.275 09:20:03 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 73591 00:14:18.275 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (73591) - No such process 00:14:18.275 09:20:03 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 73591 00:14:18.275 09:20:03 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:14:18.275 09:20:03 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:14:18.275 09:20:03 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:14:18.275 09:20:03 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=74130 00:14:18.275 09:20:03 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:18.275 09:20:03 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:14:18.275 09:20:03 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 74130 00:14:18.275 09:20:03 sw_hotplug -- common/autotest_common.sh@829 -- # '[' -z 74130 ']' 00:14:18.275 09:20:03 sw_hotplug -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.275 09:20:03 sw_hotplug -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:18.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.275 09:20:03 sw_hotplug -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.275 09:20:03 sw_hotplug -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:18.275 09:20:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:18.275 [2024-07-12 09:20:03.866332] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:18.275 [2024-07-12 09:20:03.866479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74130 ] 00:14:18.275 [2024-07-12 09:20:04.039250] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.275 [2024-07-12 09:20:04.227480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.841 09:20:04 sw_hotplug -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:18.841 09:20:04 sw_hotplug -- common/autotest_common.sh@862 -- # return 0 00:14:18.841 09:20:04 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:14:18.841 09:20:04 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.841 09:20:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:18.841 09:20:04 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.841 09:20:04 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:14:18.841 09:20:04 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:18.841 09:20:04 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:14:18.841 09:20:04 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:14:18.841 09:20:04 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:14:18.841 09:20:04 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:14:18.841 09:20:04 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:14:18.841 09:20:04 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:14:18.841 09:20:04 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:18.841 09:20:04 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:18.841 09:20:04 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:14:18.841 09:20:04 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:18.841 09:20:04 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:25.397 09:20:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:25.397 09:20:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:25.397 09:20:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:25.397 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:25.397 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:25.397 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:25.397 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:25.397 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:25.397 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:25.397 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:25.397 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:25.397 09:20:11 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.397 09:20:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:25.397 09:20:11 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.397 [2024-07-12 09:20:11.064933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:14:25.397 [2024-07-12 09:20:11.067825] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.397 [2024-07-12 09:20:11.067877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:25.397 [2024-07-12 09:20:11.067927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:25.397 [2024-07-12 09:20:11.067957] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.397 [2024-07-12 09:20:11.067978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:25.397 [2024-07-12 09:20:11.067993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:25.397 [2024-07-12 09:20:11.068011] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.397 [2024-07-12 09:20:11.068031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:25.397 [2024-07-12 09:20:11.068059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:25.397 [2024-07-12 09:20:11.068081] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.397 [2024-07-12 09:20:11.068101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:25.397 [2024-07-12 09:20:11.068117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:25.397 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:25.397 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:25.397 [2024-07-12 09:20:11.464942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
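The bdev_bdfs calls traced above are how the test decides whether the surprise-removed controllers are really gone from the target's point of view: it lists the bdevs over RPC and extracts the NVMe PCI addresses. A standalone equivalent, assuming scripts/rpc.py and jq are available (the jq filter is the one shown in the trace):

# Equivalent of the bdev_bdfs helper traced above.
RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'

bdev_bdfs() {
    $RPC bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}

# Wait until the removed controllers disappear from the target's view,
# mirroring the "Still waiting for %s to be gone" loop in the trace.
while bdfs=($(bdev_bdfs)) && (( ${#bdfs[@]} > 0 )); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
done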
00:14:25.397 [2024-07-12 09:20:11.468036] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.397 [2024-07-12 09:20:11.468109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:25.397 [2024-07-12 09:20:11.468132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:25.397 [2024-07-12 09:20:11.468160] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.397 [2024-07-12 09:20:11.468175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:25.397 [2024-07-12 09:20:11.468209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:25.397 [2024-07-12 09:20:11.468226] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.397 [2024-07-12 09:20:11.468243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:25.397 [2024-07-12 09:20:11.468256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:25.397 [2024-07-12 09:20:11.468272] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.397 [2024-07-12 09:20:11.468285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:25.397 [2024-07-12 09:20:11.468300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:25.397 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:25.398 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:25.398 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:25.398 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:25.398 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:25.398 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:25.398 09:20:11 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.398 09:20:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:25.398 09:20:11 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.398 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:25.398 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:25.398 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:25.398 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:25.398 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:25.655 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:25.655 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:25.655 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:25.655 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:25.655 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:25.655 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:25.655 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:25.655 09:20:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:37.855 09:20:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:37.855 09:20:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:37.856 09:20:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:37.856 09:20:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:37.856 09:20:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:37.856 09:20:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:37.856 09:20:23 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.856 09:20:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:37.856 09:20:23 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.856 09:20:23 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:37.856 09:20:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:37.856 09:20:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:37.856 09:20:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:37.856 09:20:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:37.856 09:20:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:37.856 09:20:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:37.856 09:20:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:37.856 09:20:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:37.856 09:20:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:37.856 09:20:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:37.856 09:20:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:37.856 09:20:24 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.856 09:20:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:37.856 09:20:24 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.856 [2024-07-12 09:20:24.065148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
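For orientation, the overall shape of the remove/attach cycle being traced here is roughly the following. This is an illustrative reconstruction from the trace, not the sw_hotplug.sh source; it reuses the bdev_bdfs helper sketched above and the standard kernel PCI sysfs interfaces, and assumes it runs as root:

# Illustrative reconstruction of the hotplug cycle (3 events, 6 s settle time,
# two controllers), not the real script.
hotplug_events=3
hotplug_wait=6
nvmes=(0000:00:10.0 0000:00:11.0)

while (( hotplug_events-- )); do
    # Surprise-remove each controller from the PCI bus.
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"
    done

    # Insist the target eventually drops every NVMe bdev it had.
    while (( $(bdev_bdfs | wc -l) > 0 )); do
        sleep 0.5
    done

    # Bring the devices back and give the target's hotplug monitor time
    # to re-attach them before the next round.
    echo 1 > /sys/bus/pci/rescan
    sleep "$hotplug_wait"
done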
00:14:37.856 [2024-07-12 09:20:24.069537] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.856 [2024-07-12 09:20:24.069590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.856 [2024-07-12 09:20:24.069619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.856 [2024-07-12 09:20:24.069648] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.856 [2024-07-12 09:20:24.069665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.856 [2024-07-12 09:20:24.069680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.856 [2024-07-12 09:20:24.069697] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.856 [2024-07-12 09:20:24.069711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.856 [2024-07-12 09:20:24.069726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.856 [2024-07-12 09:20:24.069740] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.856 [2024-07-12 09:20:24.069755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.856 [2024-07-12 09:20:24.069768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.856 [2024-07-12 09:20:24.069789] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:14:37.856 [2024-07-12 09:20:24.069805] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:14:37.856 [2024-07-12 09:20:24.069828] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:14:37.856 [2024-07-12 09:20:24.069840] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:14:37.856 09:20:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:37.856 09:20:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:38.422 [2024-07-12 09:20:24.465131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:14:38.422 [2024-07-12 09:20:24.468063] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:38.422 [2024-07-12 09:20:24.468122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:38.422 [2024-07-12 09:20:24.468144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.422 [2024-07-12 09:20:24.468172] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:38.422 [2024-07-12 09:20:24.468202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:38.422 [2024-07-12 09:20:24.468221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.422 [2024-07-12 09:20:24.468237] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:38.422 [2024-07-12 09:20:24.468252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:38.422 [2024-07-12 09:20:24.468265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.422 [2024-07-12 09:20:24.468283] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:38.422 [2024-07-12 09:20:24.468306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:38.422 [2024-07-12 09:20:24.468334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.422 09:20:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:38.422 09:20:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:38.422 09:20:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:38.422 09:20:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:38.422 09:20:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:38.422 09:20:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:38.422 09:20:24 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.422 09:20:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:38.422 09:20:24 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.422 09:20:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:38.422 09:20:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:38.422 09:20:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:38.422 09:20:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:38.422 09:20:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:38.681 09:20:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:38.681 09:20:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:38.681 09:20:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:38.681 09:20:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:38.681 09:20:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:38.681 09:20:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:38.681 09:20:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:38.681 09:20:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:50.906 09:20:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:50.906 09:20:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:50.906 09:20:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:50.906 09:20:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:50.906 09:20:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:50.906 09:20:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:50.906 09:20:36 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.906 09:20:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:50.906 09:20:36 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.906 09:20:36 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:50.906 09:20:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:50.906 09:20:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:50.906 09:20:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:50.907 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:50.907 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:50.907 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:50.907 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:50.907 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:50.907 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:50.907 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:50.907 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:50.907 09:20:37 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.907 09:20:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:50.907 09:20:37 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.907 [2024-07-12 09:20:37.065478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:14:50.907 [2024-07-12 09:20:37.068819] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:50.907 [2024-07-12 09:20:37.068874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.907 [2024-07-12 09:20:37.068902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.907 [2024-07-12 09:20:37.068966] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:50.907 [2024-07-12 09:20:37.068994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.907 [2024-07-12 09:20:37.069010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.907 [2024-07-12 09:20:37.069029] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:50.907 [2024-07-12 09:20:37.069042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.907 [2024-07-12 09:20:37.069058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.907 [2024-07-12 09:20:37.069073] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:50.907 [2024-07-12 09:20:37.069088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.907 [2024-07-12 09:20:37.069102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.907 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:50.907 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:51.471 [2024-07-12 09:20:37.565452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:14:51.471 [2024-07-12 09:20:37.568709] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:51.471 [2024-07-12 09:20:37.568769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.471 [2024-07-12 09:20:37.568792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.471 [2024-07-12 09:20:37.568822] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:51.471 [2024-07-12 09:20:37.568838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.471 [2024-07-12 09:20:37.568857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.471 [2024-07-12 09:20:37.568872] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:51.471 [2024-07-12 09:20:37.568888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.471 [2024-07-12 09:20:37.568901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.471 [2024-07-12 09:20:37.568917] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:51.471 [2024-07-12 09:20:37.568930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.471 [2024-07-12 09:20:37.568946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.471 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:51.471 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:51.471 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:51.471 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:51.471 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:51.471 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:51.471 09:20:37 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.471 09:20:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:51.471 09:20:37 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.471 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:51.471 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:51.471 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:51.471 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:51.471 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:51.729 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:51.729 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:51.729 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:51.729 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:51.729 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:51.729 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:51.729 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:51.729 09:20:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:03.998 09:20:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:03.998 09:20:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:03.998 09:20:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:03.998 09:20:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:03.998 09:20:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:03.998 09:20:49 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.998 09:20:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:03.998 09:20:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:03.998 09:20:50 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.998 09:20:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:03.998 09:20:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:03.998 09:20:50 sw_hotplug -- common/autotest_common.sh@715 -- # time=45.04 00:15:03.998 09:20:50 sw_hotplug -- common/autotest_common.sh@716 -- # echo 45.04 00:15:03.998 09:20:50 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:15:03.998 09:20:50 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.04 00:15:03.998 09:20:50 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.04 2 00:15:03.998 remove_attach_helper took 45.04s to complete (handling 2 nvme drive(s)) 09:20:50 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:15:03.998 09:20:50 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.998 09:20:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:03.998 09:20:50 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.998 09:20:50 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:15:03.998 09:20:50 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.998 09:20:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:03.998 09:20:50 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.998 09:20:50 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:15:03.998 09:20:50 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:15:03.998 09:20:50 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:15:03.998 09:20:50 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:15:03.998 09:20:50 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:15:03.998 09:20:50 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:15:03.998 09:20:50 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:15:03.998 09:20:50 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:15:03.998 09:20:50 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:15:03.998 09:20:50 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:15:03.998 09:20:50 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:15:03.998 09:20:50 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:15:03.998 09:20:50 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:10.571 09:20:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:10.571 09:20:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:10.571 09:20:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.571 [2024-07-12 09:20:56.133138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:15:10.571 [2024-07-12 09:20:56.135313] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:10.571 [2024-07-12 09:20:56.135367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.571 [2024-07-12 09:20:56.135392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.571 [2024-07-12 09:20:56.135420] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:10.571 [2024-07-12 09:20:56.135438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.571 [2024-07-12 09:20:56.135453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.571 [2024-07-12 09:20:56.135470] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:10.571 [2024-07-12 09:20:56.135484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.571 [2024-07-12 09:20:56.135504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.571 [2024-07-12 09:20:56.135518] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:10.571 [2024-07-12 09:20:56.135533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.571 [2024-07-12 09:20:56.135546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:10.571 [2024-07-12 09:20:56.533210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
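The bdev_nvme_set_hotplug calls issued just before this cycle (the sw_hotplug.sh@119/@120 steps) switch the target's own hotplug monitor off and back on; with it enabled, controllers that reappear on the bus are attached as bdevs automatically. The same RPCs, issued by hand:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$RPC" bdev_nvme_set_hotplug -d   # stop watching the bus for new NVMe devices
"$RPC" bdev_nvme_set_hotplug -e   # start watching again

# With monitoring enabled, controllers brought back by a rescan show up here:
"$RPC" bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u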
00:15:10.571 [2024-07-12 09:20:56.537406] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:10.571 [2024-07-12 09:20:56.537502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.571 [2024-07-12 09:20:56.537538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.571 [2024-07-12 09:20:56.537578] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:10.571 [2024-07-12 09:20:56.537596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.571 [2024-07-12 09:20:56.537613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.571 [2024-07-12 09:20:56.537629] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:10.571 [2024-07-12 09:20:56.537645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.571 [2024-07-12 09:20:56.537658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.571 [2024-07-12 09:20:56.537674] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:10.571 [2024-07-12 09:20:56.537698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.571 [2024-07-12 09:20:56.537718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:10.571 09:20:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.571 09:20:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:10.571 09:20:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:10.571 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:10.829 09:20:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:10.829 09:20:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:10.829 09:20:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:23.100 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:23.100 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:23.100 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:23.100 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:23.100 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:23.100 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:23.100 09:21:09 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.100 09:21:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:23.100 09:21:09 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.100 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:23.100 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:23.100 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:23.100 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:23.100 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:23.100 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:23.100 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:23.100 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:23.100 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:23.100 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:23.100 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:23.100 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:23.100 09:21:09 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.100 09:21:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:23.100 [2024-07-12 09:21:09.133328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
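Behind the echo steps in the trace (echo 1, echo uio_pci_generic, echo 0000:00:10.0, echo '') are the standard kernel PCI sysfs knobs for surprise removal, rescan, and driver selection. An illustrative sequence for one controller, using those standard paths rather than quoting sw_hotplug.sh verbatim:

bdf=0000:00:10.0

# Surprise-remove the device from the PCI bus.
echo 1 > "/sys/bus/pci/devices/$bdf/remove"

# Later, rediscover everything that was removed...
echo 1 > /sys/bus/pci/rescan

# ...and steer the device to the userspace driver the test expects before
# probing it.
echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf"          > /sys/bus/pci/drivers_probe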
00:15:23.100 [2024-07-12 09:21:09.136014] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.100 [2024-07-12 09:21:09.136067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.100 [2024-07-12 09:21:09.136092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.100 [2024-07-12 09:21:09.136120] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.100 [2024-07-12 09:21:09.136138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.100 [2024-07-12 09:21:09.136152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.100 [2024-07-12 09:21:09.136170] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.100 [2024-07-12 09:21:09.136200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.100 [2024-07-12 09:21:09.136220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.100 [2024-07-12 09:21:09.136236] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.100 [2024-07-12 09:21:09.136252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.100 [2024-07-12 09:21:09.136266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.100 09:21:09 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.100 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:23.100 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:23.358 [2024-07-12 09:21:09.533355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:15:23.359 [2024-07-12 09:21:09.536477] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.359 [2024-07-12 09:21:09.536572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.359 [2024-07-12 09:21:09.536595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.359 [2024-07-12 09:21:09.536624] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.359 [2024-07-12 09:21:09.536640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.359 [2024-07-12 09:21:09.536657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.359 [2024-07-12 09:21:09.536672] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.359 [2024-07-12 09:21:09.536687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.359 [2024-07-12 09:21:09.536701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.359 [2024-07-12 09:21:09.536717] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.359 [2024-07-12 09:21:09.536731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.359 [2024-07-12 09:21:09.536746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.359 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:23.359 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:23.359 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:23.359 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:23.359 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:23.359 09:21:09 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.359 09:21:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:23.359 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:23.359 09:21:09 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.616 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:23.616 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:23.616 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:23.616 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:23.616 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:23.616 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:23.616 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:23.616 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:23.616 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:23.617 09:21:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:23.874 09:21:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:23.874 09:21:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:23.874 09:21:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:36.144 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:36.144 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:36.144 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:36.144 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:36.144 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:36.144 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:36.144 09:21:22 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.144 09:21:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:36.144 09:21:22 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.144 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:36.144 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:36.144 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:36.144 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:36.144 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:36.144 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:36.144 [2024-07-12 09:21:22.133980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:15:36.144 [2024-07-12 09:21:22.136240] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:36.144 [2024-07-12 09:21:22.136318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.144 [2024-07-12 09:21:22.136346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.144 [2024-07-12 09:21:22.136372] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:36.144 [2024-07-12 09:21:22.136393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.144 [2024-07-12 09:21:22.136407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.144 [2024-07-12 09:21:22.136425] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:36.144 [2024-07-12 09:21:22.136439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.144 [2024-07-12 09:21:22.136454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.144 [2024-07-12 09:21:22.136469] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:36.144 [2024-07-12 09:21:22.136483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.144 [2024-07-12 09:21:22.136497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) 
qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.144 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:36.144 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:36.144 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:36.144 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:36.144 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:36.144 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:36.144 09:21:22 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.144 09:21:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:36.144 09:21:22 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.144 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:36.144 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:36.415 [2024-07-12 09:21:22.534052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:15:36.415 [2024-07-12 09:21:22.536193] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:36.415 [2024-07-12 09:21:22.536280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.415 [2024-07-12 09:21:22.536303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.415 [2024-07-12 09:21:22.536331] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:36.415 [2024-07-12 09:21:22.536347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.415 [2024-07-12 09:21:22.536363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.415 [2024-07-12 09:21:22.536379] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:36.415 [2024-07-12 09:21:22.536397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.415 [2024-07-12 09:21:22.536411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.415 [2024-07-12 09:21:22.536428] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:36.415 [2024-07-12 09:21:22.536441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.415 [2024-07-12 09:21:22.536457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.415 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:36.415 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:36.415 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:36.415 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:36.415 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:36.415 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' 
/dev/fd/63 00:15:36.415 09:21:22 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.415 09:21:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:36.415 09:21:22 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.415 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:36.415 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:36.673 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:36.673 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:36.673 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:36.673 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:36.673 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:36.673 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:36.673 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:36.673 09:21:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:36.930 09:21:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:36.930 09:21:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:36.930 09:21:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:49.131 09:21:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:49.131 09:21:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:49.131 09:21:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:49.131 09:21:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:49.131 09:21:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:49.131 09:21:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:49.131 09:21:35 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.131 09:21:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:49.131 09:21:35 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.131 09:21:35 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:49.131 09:21:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:49.131 09:21:35 sw_hotplug -- common/autotest_common.sh@715 -- # time=45.08 00:15:49.131 09:21:35 sw_hotplug -- common/autotest_common.sh@716 -- # echo 45.08 00:15:49.131 09:21:35 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:15:49.131 09:21:35 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.08 00:15:49.131 09:21:35 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.08 2 00:15:49.131 remove_attach_helper took 45.08s to complete (handling 2 nvme drive(s)) 09:21:35 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:15:49.131 09:21:35 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 74130 00:15:49.131 09:21:35 sw_hotplug -- common/autotest_common.sh@948 -- # '[' -z 74130 ']' 00:15:49.131 09:21:35 sw_hotplug -- common/autotest_common.sh@952 -- # kill -0 74130 00:15:49.131 09:21:35 sw_hotplug -- common/autotest_common.sh@953 -- # uname 00:15:49.131 09:21:35 sw_hotplug -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:49.131 09:21:35 sw_hotplug -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74130 00:15:49.131 09:21:35 sw_hotplug -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:49.131 09:21:35 
sw_hotplug -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:49.131 killing process with pid 74130 00:15:49.131 09:21:35 sw_hotplug -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74130' 00:15:49.131 09:21:35 sw_hotplug -- common/autotest_common.sh@967 -- # kill 74130 00:15:49.131 09:21:35 sw_hotplug -- common/autotest_common.sh@972 -- # wait 74130 00:15:51.033 09:21:37 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:51.598 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:51.856 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:51.856 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:51.856 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:52.115 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:52.115 00:15:52.115 real 2m31.377s 00:15:52.115 user 1m51.277s 00:15:52.115 sys 0m19.897s 00:15:52.115 09:21:38 sw_hotplug -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:52.115 ************************************ 00:15:52.115 09:21:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:52.115 END TEST sw_hotplug 00:15:52.115 ************************************ 00:15:52.115 09:21:38 -- common/autotest_common.sh@1142 -- # return 0 00:15:52.115 09:21:38 -- spdk/autotest.sh@247 -- # [[ 1 -eq 1 ]] 00:15:52.115 09:21:38 -- spdk/autotest.sh@248 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:15:52.115 09:21:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:52.115 09:21:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:52.115 09:21:38 -- common/autotest_common.sh@10 -- # set +x 00:15:52.115 ************************************ 00:15:52.115 START TEST nvme_xnvme 00:15:52.115 ************************************ 00:15:52.115 09:21:38 nvme_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:15:52.115 * Looking for test storage... 
00:15:52.115 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:52.115 09:21:38 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:52.115 09:21:38 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.115 09:21:38 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.115 09:21:38 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.115 09:21:38 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.115 09:21:38 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.115 09:21:38 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.115 09:21:38 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:15:52.115 09:21:38 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.115 09:21:38 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:15:52.115 09:21:38 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:52.115 09:21:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:52.115 09:21:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:52.115 ************************************ 00:15:52.115 START TEST xnvme_to_malloc_dd_copy 00:15:52.115 ************************************ 00:15:52.115 09:21:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1123 -- # malloc_to_xnvme_copy 00:15:52.115 09:21:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:15:52.115 09:21:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:15:52.115 09:21:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:15:52.115 09:21:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # return 00:15:52.115 09:21:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 
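The xnvme_to_malloc_dd_copy test starting above backs its xNVMe device with the kernel null_blk module: modprobe null_blk gb=1 creates /dev/nullb0 as a 1 GiB RAM-backed block device, and the module is removed again at the end of the test. A minimal shell sketch of that setup outside the harness, assuming a kernel that ships null_blk (the lsblk check is only illustrative):

    # create a 1 GiB RAM-backed block device for the xNVMe copy test to target
    sudo modprobe null_blk gb=1
    lsblk /dev/nullb0              # confirm the device exists before handing it to xnvme
    # ... run the spdk_dd copy passes against /dev/nullb0 ...
    sudo modprobe -r null_blk      # tear the device down when finished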
00:15:52.115 09:21:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:15:52.115 09:21:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:15:52.115 09:21:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:15:52.115 09:21:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:15:52.115 09:21:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:15:52.374 09:21:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:15:52.374 09:21:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:15:52.374 09:21:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:15:52.374 09:21:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:15:52.374 09:21:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:15:52.374 09:21:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:15:52.374 09:21:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:15:52.374 09:21:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:15:52.374 09:21:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:15:52.374 09:21:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:15:52.374 09:21:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:15:52.374 09:21:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:15:52.374 09:21:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:52.374 09:21:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:52.374 { 00:15:52.374 "subsystems": [ 00:15:52.374 { 00:15:52.374 "subsystem": "bdev", 00:15:52.374 "config": [ 00:15:52.374 { 00:15:52.374 "params": { 00:15:52.374 "block_size": 512, 00:15:52.374 "num_blocks": 2097152, 00:15:52.374 "name": "malloc0" 00:15:52.374 }, 00:15:52.374 "method": "bdev_malloc_create" 00:15:52.374 }, 00:15:52.374 { 00:15:52.374 "params": { 00:15:52.374 "io_mechanism": "libaio", 00:15:52.374 "filename": "/dev/nullb0", 00:15:52.374 "name": "null0" 00:15:52.374 }, 00:15:52.374 "method": "bdev_xnvme_create" 00:15:52.374 }, 00:15:52.374 { 00:15:52.374 "method": "bdev_wait_for_examine" 00:15:52.374 } 00:15:52.374 ] 00:15:52.374 } 00:15:52.374 ] 00:15:52.374 } 00:15:52.374 [2024-07-12 09:21:38.564254] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
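The gen_conf JSON just printed is what spdk_dd receives on /dev/fd/62: a 1 GiB malloc bdev (2097152 blocks of 512 bytes) plus an xNVMe bdev named null0 over /dev/nullb0 with the libaio backend. A sketch of the same invocation run by hand, with the config placed in an ordinary file (the xnvme_copy.json name is illustrative, not taken from this run):

    cat > xnvme_copy.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
              "method": "bdev_malloc_create" },
            { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
              "method": "bdev_xnvme_create" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    # copy the malloc bdev into the xNVMe bdev, as in the pass that follows
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json xnvme_copy.json

The second pass later in this test swaps the direction with --ib=null0 --ob=malloc0 against the same config.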
00:15:52.374 [2024-07-12 09:21:38.565047] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75470 ] 00:15:52.632 [2024-07-12 09:21:38.737329] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.632 [2024-07-12 09:21:38.924365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.314  Copying: 167/1024 [MB] (167 MBps) Copying: 333/1024 [MB] (165 MBps) Copying: 501/1024 [MB] (168 MBps) Copying: 668/1024 [MB] (166 MBps) Copying: 836/1024 [MB] (167 MBps) Copying: 1004/1024 [MB] (168 MBps) Copying: 1024/1024 [MB] (average 167 MBps) 00:16:04.314 00:16:04.314 09:21:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:16:04.314 09:21:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:16:04.314 09:21:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:16:04.314 09:21:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:16:04.314 { 00:16:04.314 "subsystems": [ 00:16:04.314 { 00:16:04.314 "subsystem": "bdev", 00:16:04.314 "config": [ 00:16:04.314 { 00:16:04.314 "params": { 00:16:04.314 "block_size": 512, 00:16:04.314 "num_blocks": 2097152, 00:16:04.314 "name": "malloc0" 00:16:04.314 }, 00:16:04.314 "method": "bdev_malloc_create" 00:16:04.314 }, 00:16:04.314 { 00:16:04.314 "params": { 00:16:04.314 "io_mechanism": "libaio", 00:16:04.314 "filename": "/dev/nullb0", 00:16:04.314 "name": "null0" 00:16:04.314 }, 00:16:04.314 "method": "bdev_xnvme_create" 00:16:04.314 }, 00:16:04.314 { 00:16:04.314 "method": "bdev_wait_for_examine" 00:16:04.314 } 00:16:04.314 ] 00:16:04.314 } 00:16:04.314 ] 00:16:04.314 } 00:16:04.314 [2024-07-12 09:21:49.886432] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:16:04.314 [2024-07-12 09:21:49.886594] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75596 ] 00:16:04.314 [2024-07-12 09:21:50.058837] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.314 [2024-07-12 09:21:50.244165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.627  Copying: 174/1024 [MB] (174 MBps) Copying: 347/1024 [MB] (173 MBps) Copying: 521/1024 [MB] (173 MBps) Copying: 692/1024 [MB] (171 MBps) Copying: 866/1024 [MB] (173 MBps) Copying: 1024/1024 [MB] (average 173 MBps) 00:16:14.627 00:16:14.627 09:22:00 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:16:14.627 09:22:00 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:16:14.627 09:22:00 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:16:14.627 09:22:00 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:16:14.627 09:22:00 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:16:14.627 09:22:00 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:16:14.627 { 00:16:14.627 "subsystems": [ 00:16:14.627 { 00:16:14.627 "subsystem": "bdev", 00:16:14.627 "config": [ 00:16:14.627 { 00:16:14.627 "params": { 00:16:14.627 "block_size": 512, 00:16:14.627 "num_blocks": 2097152, 00:16:14.627 "name": "malloc0" 00:16:14.627 }, 00:16:14.627 "method": "bdev_malloc_create" 00:16:14.627 }, 00:16:14.627 { 00:16:14.627 "params": { 00:16:14.627 "io_mechanism": "io_uring", 00:16:14.627 "filename": "/dev/nullb0", 00:16:14.627 "name": "null0" 00:16:14.627 }, 00:16:14.627 "method": "bdev_xnvme_create" 00:16:14.627 }, 00:16:14.627 { 00:16:14.627 "method": "bdev_wait_for_examine" 00:16:14.627 } 00:16:14.627 ] 00:16:14.627 } 00:16:14.627 ] 00:16:14.627 } 00:16:14.885 [2024-07-12 09:22:00.992688] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
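The for io in "${xnvme_io[@]}" loop repeats both copy directions once per backend; between the libaio passes above and the io_uring passes that follow, the only change is the io_mechanism parameter of bdev_xnvme_create. Under the same illustrative xnvme_copy.json from the earlier sketch, the io_uring rerun would amount to:

    # switch the xNVMe backend and repeat both copy directions; nothing else changes
    sed -i 's/"io_mechanism": "libaio"/"io_mechanism": "io_uring"/' xnvme_copy.json
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json xnvme_copy.json
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json xnvme_copy.json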
00:16:14.885 [2024-07-12 09:22:00.992857] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75717 ] 00:16:14.885 [2024-07-12 09:22:01.164745] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.143 [2024-07-12 09:22:01.357014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.615  Copying: 176/1024 [MB] (176 MBps) Copying: 352/1024 [MB] (176 MBps) Copying: 531/1024 [MB] (178 MBps) Copying: 707/1024 [MB] (176 MBps) Copying: 884/1024 [MB] (176 MBps) Copying: 1024/1024 [MB] (average 177 MBps) 00:16:25.615 00:16:25.615 09:22:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:16:25.615 09:22:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:16:25.615 09:22:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:16:25.615 09:22:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:16:25.615 { 00:16:25.615 "subsystems": [ 00:16:25.615 { 00:16:25.615 "subsystem": "bdev", 00:16:25.615 "config": [ 00:16:25.615 { 00:16:25.615 "params": { 00:16:25.615 "block_size": 512, 00:16:25.615 "num_blocks": 2097152, 00:16:25.615 "name": "malloc0" 00:16:25.615 }, 00:16:25.615 "method": "bdev_malloc_create" 00:16:25.615 }, 00:16:25.615 { 00:16:25.615 "params": { 00:16:25.615 "io_mechanism": "io_uring", 00:16:25.615 "filename": "/dev/nullb0", 00:16:25.615 "name": "null0" 00:16:25.615 }, 00:16:25.615 "method": "bdev_xnvme_create" 00:16:25.615 }, 00:16:25.615 { 00:16:25.615 "method": "bdev_wait_for_examine" 00:16:25.615 } 00:16:25.615 ] 00:16:25.615 } 00:16:25.615 ] 00:16:25.615 } 00:16:25.615 [2024-07-12 09:22:11.925885] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:16:25.615 [2024-07-12 09:22:11.926034] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75843 ] 00:16:25.874 [2024-07-12 09:22:12.090374] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.132 [2024-07-12 09:22:12.275322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.268  Copying: 181/1024 [MB] (181 MBps) Copying: 362/1024 [MB] (181 MBps) Copying: 539/1024 [MB] (177 MBps) Copying: 715/1024 [MB] (176 MBps) Copying: 894/1024 [MB] (179 MBps) Copying: 1024/1024 [MB] (average 178 MBps) 00:16:37.268 00:16:37.268 09:22:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:16:37.268 09:22:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@195 -- # modprobe -r null_blk 00:16:37.268 00:16:37.268 real 0m44.368s 00:16:37.268 user 0m38.840s 00:16:37.268 sys 0m4.934s 00:16:37.268 09:22:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:37.268 09:22:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:16:37.268 ************************************ 00:16:37.268 END TEST xnvme_to_malloc_dd_copy 00:16:37.268 ************************************ 00:16:37.268 09:22:22 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:16:37.268 09:22:22 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:37.268 09:22:22 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:37.268 09:22:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:37.268 09:22:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:37.268 ************************************ 00:16:37.268 START TEST xnvme_bdevperf 00:16:37.268 ************************************ 00:16:37.268 09:22:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1123 -- # xnvme_bdevperf 00:16:37.268 09:22:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:16:37.268 09:22:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:16:37.268 09:22:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:16:37.268 09:22:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # return 00:16:37.268 09:22:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:16:37.268 09:22:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:16:37.268 09:22:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:16:37.268 09:22:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:16:37.268 09:22:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:16:37.268 09:22:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:16:37.268 09:22:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:16:37.268 09:22:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:16:37.268 09:22:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:16:37.268 09:22:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:16:37.268 09:22:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # 
for io in "${xnvme_io[@]}" 00:16:37.268 09:22:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:16:37.268 09:22:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:16:37.269 09:22:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:16:37.269 09:22:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:37.269 09:22:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:37.269 { 00:16:37.269 "subsystems": [ 00:16:37.269 { 00:16:37.269 "subsystem": "bdev", 00:16:37.269 "config": [ 00:16:37.269 { 00:16:37.269 "params": { 00:16:37.269 "io_mechanism": "libaio", 00:16:37.269 "filename": "/dev/nullb0", 00:16:37.269 "name": "null0" 00:16:37.269 }, 00:16:37.269 "method": "bdev_xnvme_create" 00:16:37.269 }, 00:16:37.269 { 00:16:37.269 "method": "bdev_wait_for_examine" 00:16:37.269 } 00:16:37.269 ] 00:16:37.269 } 00:16:37.269 ] 00:16:37.269 } 00:16:37.269 [2024-07-12 09:22:22.971029] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:37.269 [2024-07-12 09:22:22.971212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75987 ] 00:16:37.269 [2024-07-12 09:22:23.135594] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.269 [2024-07-12 09:22:23.318704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.527 Running I/O for 5 seconds... 00:16:42.824 00:16:42.824 Latency(us) 00:16:42.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.824 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:42.824 null0 : 5.00 114157.53 445.93 0.00 0.00 557.20 168.49 703.77 00:16:42.824 =================================================================================================================== 00:16:42.824 Total : 114157.53 445.93 0.00 0.00 557.20 168.49 703.77 00:16:43.760 09:22:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:16:43.760 09:22:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:16:43.760 09:22:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:16:43.760 09:22:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:16:43.760 09:22:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:43.760 09:22:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:43.760 { 00:16:43.760 "subsystems": [ 00:16:43.760 { 00:16:43.760 "subsystem": "bdev", 00:16:43.760 "config": [ 00:16:43.760 { 00:16:43.760 "params": { 00:16:43.760 "io_mechanism": "io_uring", 00:16:43.760 "filename": "/dev/nullb0", 00:16:43.760 "name": "null0" 00:16:43.760 }, 00:16:43.760 "method": "bdev_xnvme_create" 00:16:43.760 }, 00:16:43.760 { 00:16:43.760 "method": "bdev_wait_for_examine" 00:16:43.760 } 00:16:43.760 ] 00:16:43.760 } 00:16:43.760 ] 00:16:43.760 } 00:16:43.760 [2024-07-12 09:22:29.856239] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
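The xnvme_bdevperf test drives the same null0 xNVMe bdev with the bdevperf example app: queue depth 64, 4 KiB random reads, 5 seconds, restricted to the null0 bdev via -T. In this run the libaio pass above averaged roughly 114k IOPS and the io_uring pass that follows roughly 150k IOPS against the null device. A sketch of the io_uring invocation as a standalone command, with the generated config written to a file (the xnvme_bdevperf.json name is illustrative):

    cat > xnvme_bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "io_mechanism": "io_uring", "filename": "/dev/nullb0", "name": "null0" },
              "method": "bdev_xnvme_create" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json xnvme_bdevperf.json \
        -q 64 -w randread -t 5 -T null0 -o 4096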
00:16:43.760 [2024-07-12 09:22:29.856423] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76064 ] 00:16:43.760 [2024-07-12 09:22:30.031612] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.018 [2024-07-12 09:22:30.218445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.277 Running I/O for 5 seconds... 00:16:49.566 00:16:49.566 Latency(us) 00:16:49.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.566 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:49.566 null0 : 5.00 149619.10 584.45 0.00 0.00 424.36 256.93 759.62 00:16:49.566 =================================================================================================================== 00:16:49.566 Total : 149619.10 584.45 0.00 0.00 424.36 256.93 759.62 00:16:50.500 09:22:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:16:50.500 09:22:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@195 -- # modprobe -r null_blk 00:16:50.500 00:16:50.500 real 0m13.801s 00:16:50.500 user 0m10.773s 00:16:50.501 sys 0m2.795s 00:16:50.501 09:22:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:50.501 09:22:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:50.501 ************************************ 00:16:50.501 END TEST xnvme_bdevperf 00:16:50.501 ************************************ 00:16:50.501 09:22:36 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:16:50.501 ************************************ 00:16:50.501 END TEST nvme_xnvme 00:16:50.501 ************************************ 00:16:50.501 00:16:50.501 real 0m58.351s 00:16:50.501 user 0m49.673s 00:16:50.501 sys 0m7.845s 00:16:50.501 09:22:36 nvme_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:50.501 09:22:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:50.501 09:22:36 -- common/autotest_common.sh@1142 -- # return 0 00:16:50.501 09:22:36 -- spdk/autotest.sh@249 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:50.501 09:22:36 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:50.501 09:22:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:50.501 09:22:36 -- common/autotest_common.sh@10 -- # set +x 00:16:50.501 ************************************ 00:16:50.501 START TEST blockdev_xnvme 00:16:50.501 ************************************ 00:16:50.501 09:22:36 blockdev_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:50.501 * Looking for test storage... 
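The blockdev_xnvme suite that starts here launches spdk_tgt, rebinds the emulated NVMe controllers back to the kernel nvme driver via scripts/setup.sh reset, and then registers every /dev/nvme*n* namespace as an xNVMe bdev with the io_uring backend through RPC (the bdev_xnvme_create calls printed further down). A hedged sketch of the same registration done by hand against a running target, assuming rpc_cmd resolves to the repo's scripts/rpc.py as usual:

    # register one kernel NVMe namespace as an xNVMe bdev named nvme0n1 over io_uring
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring
    # repeat for the remaining namespaces, then let bdev examination finish and list the result
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs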
00:16:50.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/blockdev.sh@674 -- # uname -s 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/blockdev.sh@682 -- # test_type=xnvme 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/blockdev.sh@684 -- # dek= 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == bdev ]] 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == crypto_* ]] 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=76206 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:50.501 09:22:36 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 76206 00:16:50.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.501 09:22:36 blockdev_xnvme -- common/autotest_common.sh@829 -- # '[' -z 76206 ']' 00:16:50.501 09:22:36 blockdev_xnvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.501 09:22:36 blockdev_xnvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:50.501 09:22:36 blockdev_xnvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.501 09:22:36 blockdev_xnvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:50.501 09:22:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:50.759 [2024-07-12 09:22:36.948078] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:16:50.759 [2024-07-12 09:22:36.948707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76206 ] 00:16:51.016 [2024-07-12 09:22:37.114020] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.016 [2024-07-12 09:22:37.340359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.951 09:22:38 blockdev_xnvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.951 09:22:38 blockdev_xnvme -- common/autotest_common.sh@862 -- # return 0 00:16:51.951 09:22:38 blockdev_xnvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:16:51.951 09:22:38 blockdev_xnvme -- bdev/blockdev.sh@729 -- # setup_xnvme_conf 00:16:51.951 09:22:38 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:16:51.951 09:22:38 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:16:51.951 09:22:38 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:52.209 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:52.468 Waiting for block devices as requested 00:16:52.468 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:52.468 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:52.468 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:52.727 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:57.986 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:57.986 09:22:43 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:16:57.986 09:22:43 blockdev_xnvme -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:16:57.986 09:22:43 blockdev_xnvme -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:16:57.986 09:22:43 blockdev_xnvme -- common/autotest_common.sh@1670 -- # local nvme bdf 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:16:57.986 09:22:44 blockdev_xnvme -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:16:57.986 09:22:44 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:57.986 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:57.986 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:16:57.986 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:57.986 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:57.986 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:57.986 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:16:57.986 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:57.986 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:57.986 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:57.986 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:16:57.986 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:57.986 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:57.986 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:57.986 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:16:57.986 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:57.986 09:22:44 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:57.986 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:57.986 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:16:57.986 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:57.986 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:57.986 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:57.986 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:16:57.987 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:57.987 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:57.987 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:16:57.987 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:16:57.987 09:22:44 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.987 09:22:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:57.987 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:16:57.987 nvme0n1 00:16:57.987 nvme1n1 00:16:57.987 nvme2n1 00:16:57.987 nvme2n2 00:16:57.987 nvme2n3 00:16:57.987 nvme3n1 00:16:57.987 09:22:44 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.987 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:16:57.987 09:22:44 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.987 09:22:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:57.987 09:22:44 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.987 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@740 -- # cat 00:16:57.987 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:16:57.987 09:22:44 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.987 09:22:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:57.987 09:22:44 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.987 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:16:57.987 09:22:44 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.987 09:22:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:57.987 09:22:44 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.987 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:57.987 09:22:44 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.987 09:22:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:57.987 09:22:44 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.987 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:16:57.987 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:16:57.987 09:22:44 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.987 
09:22:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:57.987 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:16:57.987 09:22:44 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.987 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:16:57.987 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:16:57.987 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "e42ec397-80f3-4c39-b8d9-ceaf156b2153"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e42ec397-80f3-4c39-b8d9-ceaf156b2153",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "083a526a-5e8a-4b1d-ad24-47f47140c4d8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "083a526a-5e8a-4b1d-ad24-47f47140c4d8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "f1e127c5-d7aa-4d04-97ef-fd4ee3b2ddbd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f1e127c5-d7aa-4d04-97ef-fd4ee3b2ddbd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "3f70b78d-859f-48d7-9974-dacd7afae60f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3f70b78d-859f-48d7-9974-dacd7afae60f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 
0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "a71816d5-a10a-4b67-aafd-d755e79c9702"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a71816d5-a10a-4b67-aafd-d755e79c9702",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "59307a39-81f9-4cc7-8d0c-d06a80f7553a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "59307a39-81f9-4cc7-8d0c-d06a80f7553a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:57.987 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:16:57.987 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=nvme0n1 00:16:57.987 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:16:57.987 09:22:44 blockdev_xnvme -- bdev/blockdev.sh@754 -- # killprocess 76206 00:16:57.987 09:22:44 blockdev_xnvme -- common/autotest_common.sh@948 -- # '[' -z 76206 ']' 00:16:57.987 09:22:44 blockdev_xnvme -- common/autotest_common.sh@952 -- # kill -0 76206 00:16:57.987 09:22:44 blockdev_xnvme -- common/autotest_common.sh@953 -- # uname 00:16:57.987 09:22:44 blockdev_xnvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:57.987 09:22:44 blockdev_xnvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76206 00:16:57.987 killing process with pid 76206 00:16:57.987 09:22:44 blockdev_xnvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:57.987 09:22:44 blockdev_xnvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:57.987 09:22:44 blockdev_xnvme -- common/autotest_common.sh@966 -- # echo 
'killing process with pid 76206' 00:16:57.987 09:22:44 blockdev_xnvme -- common/autotest_common.sh@967 -- # kill 76206 00:16:57.987 09:22:44 blockdev_xnvme -- common/autotest_common.sh@972 -- # wait 76206 00:17:00.515 09:22:46 blockdev_xnvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:00.515 09:22:46 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:17:00.515 09:22:46 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:17:00.515 09:22:46 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:00.515 09:22:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:00.515 ************************************ 00:17:00.515 START TEST bdev_hello_world 00:17:00.515 ************************************ 00:17:00.515 09:22:46 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:17:00.515 [2024-07-12 09:22:46.473026] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:00.515 [2024-07-12 09:22:46.473319] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76579 ] 00:17:00.515 [2024-07-12 09:22:46.655487] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.773 [2024-07-12 09:22:46.876164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.031 [2024-07-12 09:22:47.264002] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:01.031 [2024-07-12 09:22:47.264063] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:17:01.031 [2024-07-12 09:22:47.264106] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:01.031 [2024-07-12 09:22:47.266360] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:01.031 [2024-07-12 09:22:47.266619] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:01.031 [2024-07-12 09:22:47.266651] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:01.031 [2024-07-12 09:22:47.266931] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
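The bdev_hello_world test running here drives the hello_bdev example against the first xNVMe bdev using the bdev.json config this suite generated: it opens nvme0n1, writes "Hello World!", reads the string back, and stops. The standalone form of the command as the harness ran it in this log:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1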
00:17:01.031 00:17:01.031 [2024-07-12 09:22:47.266977] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:02.404 00:17:02.404 real 0m2.008s 00:17:02.404 user 0m1.680s 00:17:02.404 sys 0m0.212s 00:17:02.404 09:22:48 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:02.404 09:22:48 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:02.404 ************************************ 00:17:02.404 END TEST bdev_hello_world 00:17:02.404 ************************************ 00:17:02.404 09:22:48 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:17:02.404 09:22:48 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:17:02.404 09:22:48 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:02.404 09:22:48 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:02.404 09:22:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:02.404 ************************************ 00:17:02.404 START TEST bdev_bounds 00:17:02.404 ************************************ 00:17:02.404 Process bdevio pid: 76623 00:17:02.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.404 09:22:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:17:02.404 09:22:48 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=76623 00:17:02.404 09:22:48 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:02.404 09:22:48 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:02.404 09:22:48 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 76623' 00:17:02.404 09:22:48 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 76623 00:17:02.404 09:22:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 76623 ']' 00:17:02.404 09:22:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.404 09:22:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:02.404 09:22:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.404 09:22:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:02.404 09:22:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:02.404 [2024-07-12 09:22:48.556249] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
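The bdev_bounds test starting here launches the bdevio app on all six xNVMe bdevs (-s 0 matching the PRE_RESERVED_MEM=0 setting above; -w appears to hold bdevio idle until it is driven over RPC, which is what tests.py perform_tests does next) and then triggers the CUnit suites below. A sketch of the two commands roughly as the harness runs them, assuming the generated bdev.json is already in place:

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    # once bdevio is listening on its RPC socket, kick off every suite
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests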
00:17:02.404 [2024-07-12 09:22:48.556494] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76623 ] 00:17:02.404 [2024-07-12 09:22:48.741652] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:02.662 [2024-07-12 09:22:48.930843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.662 [2024-07-12 09:22:48.930943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.662 [2024-07-12 09:22:48.930955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.229 09:22:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:03.229 09:22:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:17:03.229 09:22:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:03.229 I/O targets: 00:17:03.229 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:17:03.229 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:17:03.229 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:03.229 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:03.229 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:03.229 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:17:03.229 00:17:03.229 00:17:03.229 CUnit - A unit testing framework for C - Version 2.1-3 00:17:03.229 http://cunit.sourceforge.net/ 00:17:03.229 00:17:03.229 00:17:03.229 Suite: bdevio tests on: nvme3n1 00:17:03.229 Test: blockdev write read block ...passed 00:17:03.229 Test: blockdev write zeroes read block ...passed 00:17:03.229 Test: blockdev write zeroes read no split ...passed 00:17:03.229 Test: blockdev write zeroes read split ...passed 00:17:03.513 Test: blockdev write zeroes read split partial ...passed 00:17:03.513 Test: blockdev reset ...passed 00:17:03.513 Test: blockdev write read 8 blocks ...passed 00:17:03.513 Test: blockdev write read size > 128k ...passed 00:17:03.513 Test: blockdev write read invalid size ...passed 00:17:03.513 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:03.513 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:03.513 Test: blockdev write read max offset ...passed 00:17:03.513 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:03.513 Test: blockdev writev readv 8 blocks ...passed 00:17:03.513 Test: blockdev writev readv 30 x 1block ...passed 00:17:03.513 Test: blockdev writev readv block ...passed 00:17:03.513 Test: blockdev writev readv size > 128k ...passed 00:17:03.513 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:03.513 Test: blockdev comparev and writev ...passed 00:17:03.513 Test: blockdev nvme passthru rw ...passed 00:17:03.513 Test: blockdev nvme passthru vendor specific ...passed 00:17:03.513 Test: blockdev nvme admin passthru ...passed 00:17:03.513 Test: blockdev copy ...passed 00:17:03.513 Suite: bdevio tests on: nvme2n3 00:17:03.513 Test: blockdev write read block ...passed 00:17:03.513 Test: blockdev write zeroes read block ...passed 00:17:03.513 Test: blockdev write zeroes read no split ...passed 00:17:03.513 Test: blockdev write zeroes read split ...passed 00:17:03.513 Test: blockdev write zeroes read split partial ...passed 00:17:03.513 Test: blockdev reset ...passed 
00:17:03.513 Test: blockdev write read 8 blocks ...passed 00:17:03.513 Test: blockdev write read size > 128k ...passed 00:17:03.513 Test: blockdev write read invalid size ...passed 00:17:03.513 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:03.513 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:03.513 Test: blockdev write read max offset ...passed 00:17:03.513 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:03.513 Test: blockdev writev readv 8 blocks ...passed 00:17:03.513 Test: blockdev writev readv 30 x 1block ...passed 00:17:03.513 Test: blockdev writev readv block ...passed 00:17:03.513 Test: blockdev writev readv size > 128k ...passed 00:17:03.513 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:03.513 Test: blockdev comparev and writev ...passed 00:17:03.513 Test: blockdev nvme passthru rw ...passed 00:17:03.513 Test: blockdev nvme passthru vendor specific ...passed 00:17:03.513 Test: blockdev nvme admin passthru ...passed 00:17:03.513 Test: blockdev copy ...passed 00:17:03.513 Suite: bdevio tests on: nvme2n2 00:17:03.513 Test: blockdev write read block ...passed 00:17:03.513 Test: blockdev write zeroes read block ...passed 00:17:03.513 Test: blockdev write zeroes read no split ...passed 00:17:03.513 Test: blockdev write zeroes read split ...passed 00:17:03.513 Test: blockdev write zeroes read split partial ...passed 00:17:03.513 Test: blockdev reset ...passed 00:17:03.513 Test: blockdev write read 8 blocks ...passed 00:17:03.513 Test: blockdev write read size > 128k ...passed 00:17:03.513 Test: blockdev write read invalid size ...passed 00:17:03.513 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:03.513 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:03.513 Test: blockdev write read max offset ...passed 00:17:03.513 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:03.513 Test: blockdev writev readv 8 blocks ...passed 00:17:03.513 Test: blockdev writev readv 30 x 1block ...passed 00:17:03.513 Test: blockdev writev readv block ...passed 00:17:03.513 Test: blockdev writev readv size > 128k ...passed 00:17:03.513 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:03.513 Test: blockdev comparev and writev ...passed 00:17:03.513 Test: blockdev nvme passthru rw ...passed 00:17:03.513 Test: blockdev nvme passthru vendor specific ...passed 00:17:03.513 Test: blockdev nvme admin passthru ...passed 00:17:03.513 Test: blockdev copy ...passed 00:17:03.513 Suite: bdevio tests on: nvme2n1 00:17:03.513 Test: blockdev write read block ...passed 00:17:03.513 Test: blockdev write zeroes read block ...passed 00:17:03.513 Test: blockdev write zeroes read no split ...passed 00:17:03.513 Test: blockdev write zeroes read split ...passed 00:17:03.513 Test: blockdev write zeroes read split partial ...passed 00:17:03.513 Test: blockdev reset ...passed 00:17:03.513 Test: blockdev write read 8 blocks ...passed 00:17:03.513 Test: blockdev write read size > 128k ...passed 00:17:03.513 Test: blockdev write read invalid size ...passed 00:17:03.513 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:03.513 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:03.513 Test: blockdev write read max offset ...passed 00:17:03.513 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:03.513 Test: blockdev writev readv 8 blocks 
...passed 00:17:03.513 Test: blockdev writev readv 30 x 1block ...passed 00:17:03.513 Test: blockdev writev readv block ...passed 00:17:03.513 Test: blockdev writev readv size > 128k ...passed 00:17:03.513 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:03.513 Test: blockdev comparev and writev ...passed 00:17:03.513 Test: blockdev nvme passthru rw ...passed 00:17:03.513 Test: blockdev nvme passthru vendor specific ...passed 00:17:03.513 Test: blockdev nvme admin passthru ...passed 00:17:03.513 Test: blockdev copy ...passed 00:17:03.513 Suite: bdevio tests on: nvme1n1 00:17:03.513 Test: blockdev write read block ...passed 00:17:03.513 Test: blockdev write zeroes read block ...passed 00:17:03.513 Test: blockdev write zeroes read no split ...passed 00:17:03.513 Test: blockdev write zeroes read split ...passed 00:17:03.771 Test: blockdev write zeroes read split partial ...passed 00:17:03.771 Test: blockdev reset ...passed 00:17:03.771 Test: blockdev write read 8 blocks ...passed 00:17:03.771 Test: blockdev write read size > 128k ...passed 00:17:03.771 Test: blockdev write read invalid size ...passed 00:17:03.771 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:03.771 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:03.771 Test: blockdev write read max offset ...passed 00:17:03.771 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:03.771 Test: blockdev writev readv 8 blocks ...passed 00:17:03.771 Test: blockdev writev readv 30 x 1block ...passed 00:17:03.771 Test: blockdev writev readv block ...passed 00:17:03.771 Test: blockdev writev readv size > 128k ...passed 00:17:03.771 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:03.771 Test: blockdev comparev and writev ...passed 00:17:03.771 Test: blockdev nvme passthru rw ...passed 00:17:03.771 Test: blockdev nvme passthru vendor specific ...passed 00:17:03.771 Test: blockdev nvme admin passthru ...passed 00:17:03.771 Test: blockdev copy ...passed 00:17:03.771 Suite: bdevio tests on: nvme0n1 00:17:03.771 Test: blockdev write read block ...passed 00:17:03.771 Test: blockdev write zeroes read block ...passed 00:17:03.771 Test: blockdev write zeroes read no split ...passed 00:17:03.771 Test: blockdev write zeroes read split ...passed 00:17:03.771 Test: blockdev write zeroes read split partial ...passed 00:17:03.771 Test: blockdev reset ...passed 00:17:03.771 Test: blockdev write read 8 blocks ...passed 00:17:03.771 Test: blockdev write read size > 128k ...passed 00:17:03.771 Test: blockdev write read invalid size ...passed 00:17:03.771 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:03.771 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:03.771 Test: blockdev write read max offset ...passed 00:17:03.771 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:03.771 Test: blockdev writev readv 8 blocks ...passed 00:17:03.771 Test: blockdev writev readv 30 x 1block ...passed 00:17:03.771 Test: blockdev writev readv block ...passed 00:17:03.771 Test: blockdev writev readv size > 128k ...passed 00:17:03.771 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:03.771 Test: blockdev comparev and writev ...passed 00:17:03.771 Test: blockdev nvme passthru rw ...passed 00:17:03.771 Test: blockdev nvme passthru vendor specific ...passed 00:17:03.771 Test: blockdev nvme admin passthru ...passed 00:17:03.771 Test: blockdev copy ...passed 
00:17:03.771 00:17:03.771 Run Summary: Type Total Ran Passed Failed Inactive 00:17:03.771 suites 6 6 n/a 0 0 00:17:03.771 tests 138 138 138 0 0 00:17:03.771 asserts 780 780 780 0 n/a 00:17:03.771 00:17:03.771 Elapsed time = 1.139 seconds 00:17:03.771 0 00:17:03.771 09:22:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 76623 00:17:03.771 09:22:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 76623 ']' 00:17:03.771 09:22:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 76623 00:17:03.771 09:22:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:17:03.771 09:22:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:03.771 09:22:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76623 00:17:03.771 killing process with pid 76623 00:17:03.771 09:22:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:03.771 09:22:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:03.771 09:22:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76623' 00:17:03.771 09:22:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 76623 00:17:03.771 09:22:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 76623 00:17:05.148 ************************************ 00:17:05.148 END TEST bdev_bounds 00:17:05.148 ************************************ 00:17:05.148 09:22:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:17:05.148 00:17:05.148 real 0m2.667s 00:17:05.148 user 0m6.183s 00:17:05.148 sys 0m0.366s 00:17:05.148 09:22:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:05.148 09:22:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:05.148 09:22:51 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:17:05.148 09:22:51 blockdev_xnvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:17:05.148 09:22:51 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:05.148 09:22:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:05.148 09:22:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:05.148 ************************************ 00:17:05.148 START TEST bdev_nbd 00:17:05.148 ************************************ 00:17:05.148 09:22:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:17:05.148 09:22:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:17:05.148 09:22:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:17:05.148 09:22:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:05.148 09:22:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:05.148 09:22:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:05.148 09:22:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 
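(A brief aside, not part of the captured output: the killprocess teardown traced above is a plain shell idiom. A minimal sketch of it, using the PID 76623 that appears in this run, would be:

    pid=76623
    if kill -0 "$pid" 2>/dev/null; then        # is the app still running?
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                            # reap it so the next test starts from a clean state
    fi

The real helper in autotest_common.sh additionally checks `uname` and the process name from `ps --no-headers -o comm=` before sending the signal, exactly as the trace shows.)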
00:17:05.148 09:22:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:17:05.148 09:22:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:17:05.148 09:22:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:17:05.148 09:22:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:17:05.148 09:22:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:17:05.148 09:22:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:05.148 09:22:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:17:05.148 09:22:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:05.148 09:22:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:17:05.148 09:22:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=76680 00:17:05.148 09:22:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:17:05.148 09:22:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 76680 /var/tmp/spdk-nbd.sock 00:17:05.148 09:22:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 76680 ']' 00:17:05.148 09:22:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:05.148 09:22:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:05.148 09:22:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:05.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:05.148 09:22:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:05.148 09:22:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:05.148 09:22:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:05.148 [2024-07-12 09:22:51.249667] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
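(A brief aside, not part of the captured output: the NBD verification that follows drives everything through rpc.py against the socket shown above. A condensed, hand-run sketch of the same start/inspect/stop cycle, assuming bdev_svc is already listening on /var/tmp/spdk-nbd.sock and reusing the names from this run, would be:

    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1      # prints the assigned node, /dev/nbd0 in this run
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks               # JSON list of nbd_device/bdev_name pairs
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0

The test below runs this cycle for all six bdevs before moving on to the random-data write/compare pass.)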
00:17:05.148 [2024-07-12 09:22:51.249798] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.148 [2024-07-12 09:22:51.417839] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.407 [2024-07-12 09:22:51.606640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.975 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:05.975 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:17:05.975 09:22:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:17:05.975 09:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:05.975 09:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:05.975 09:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:05.975 09:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:17:05.975 09:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:05.975 09:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:05.975 09:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:05.975 09:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:05.975 09:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:05.975 09:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:05.975 09:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:05.975 09:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:17:06.259 09:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:06.259 09:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:17:06.259 09:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:06.259 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:17:06.259 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:06.259 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:06.259 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:06.259 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:17:06.259 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:06.259 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:06.259 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:06.259 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:06.259 
1+0 records in 00:17:06.259 1+0 records out 00:17:06.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000817636 s, 5.0 MB/s 00:17:06.259 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.259 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:06.259 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.259 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:06.259 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:06.259 09:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:06.259 09:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:06.259 09:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:17:06.517 09:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:17:06.517 09:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:17:06.517 09:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:17:06.517 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:17:06.517 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:06.517 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:06.517 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:06.517 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:17:06.517 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:06.517 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:06.517 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:06.517 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:06.517 1+0 records in 00:17:06.517 1+0 records out 00:17:06.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507106 s, 8.1 MB/s 00:17:06.517 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.517 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:06.517 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.517 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:06.517 09:22:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:06.517 09:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:06.517 09:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:06.517 09:22:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:17:06.776 09:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:17:06.776 09:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:17:06.776 09:22:53 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:17:06.776 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:17:06.776 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:06.776 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:06.776 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:06.776 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:17:06.776 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:06.776 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:06.776 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:06.776 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:06.776 1+0 records in 00:17:06.776 1+0 records out 00:17:06.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551638 s, 7.4 MB/s 00:17:06.776 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.776 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:06.776 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.776 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:06.776 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:06.776 09:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:06.776 09:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:06.776 09:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:17:07.035 09:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:17:07.035 09:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:17:07.035 09:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:17:07.035 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:17:07.035 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:07.035 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:07.035 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:07.035 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:17:07.035 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:07.035 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:07.035 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:07.035 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:07.035 1+0 records in 00:17:07.035 1+0 records out 00:17:07.035 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106142 s, 3.9 MB/s 00:17:07.035 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.035 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:07.035 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.035 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:07.035 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:07.035 09:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:07.035 09:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:07.035 09:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:17:07.601 09:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:17:07.601 09:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:17:07.601 09:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:17:07.601 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:17:07.601 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:07.601 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:07.601 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:07.601 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:17:07.601 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:07.601 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:07.601 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:07.601 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:07.601 1+0 records in 00:17:07.601 1+0 records out 00:17:07.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000933622 s, 4.4 MB/s 00:17:07.601 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.601 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:07.601 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.601 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:07.601 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:07.601 09:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:07.601 09:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:07.601 09:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:17:07.860 09:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:17:07.860 09:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:17:07.860 09:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:17:07.860 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:17:07.860 09:22:53 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:07.860 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:07.860 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:07.860 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:17:07.860 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:07.860 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:07.860 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:07.860 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:07.860 1+0 records in 00:17:07.860 1+0 records out 00:17:07.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000767188 s, 5.3 MB/s 00:17:07.860 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.860 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:07.860 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.860 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:07.860 09:22:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:07.860 09:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:07.860 09:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:07.860 09:22:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:08.118 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:08.118 { 00:17:08.118 "nbd_device": "/dev/nbd0", 00:17:08.118 "bdev_name": "nvme0n1" 00:17:08.118 }, 00:17:08.118 { 00:17:08.118 "nbd_device": "/dev/nbd1", 00:17:08.118 "bdev_name": "nvme1n1" 00:17:08.118 }, 00:17:08.118 { 00:17:08.118 "nbd_device": "/dev/nbd2", 00:17:08.118 "bdev_name": "nvme2n1" 00:17:08.118 }, 00:17:08.118 { 00:17:08.118 "nbd_device": "/dev/nbd3", 00:17:08.118 "bdev_name": "nvme2n2" 00:17:08.118 }, 00:17:08.118 { 00:17:08.118 "nbd_device": "/dev/nbd4", 00:17:08.118 "bdev_name": "nvme2n3" 00:17:08.118 }, 00:17:08.118 { 00:17:08.118 "nbd_device": "/dev/nbd5", 00:17:08.118 "bdev_name": "nvme3n1" 00:17:08.118 } 00:17:08.118 ]' 00:17:08.118 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:08.118 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:08.118 { 00:17:08.118 "nbd_device": "/dev/nbd0", 00:17:08.118 "bdev_name": "nvme0n1" 00:17:08.118 }, 00:17:08.118 { 00:17:08.118 "nbd_device": "/dev/nbd1", 00:17:08.118 "bdev_name": "nvme1n1" 00:17:08.118 }, 00:17:08.118 { 00:17:08.118 "nbd_device": "/dev/nbd2", 00:17:08.118 "bdev_name": "nvme2n1" 00:17:08.118 }, 00:17:08.118 { 00:17:08.118 "nbd_device": "/dev/nbd3", 00:17:08.118 "bdev_name": "nvme2n2" 00:17:08.118 }, 00:17:08.118 { 00:17:08.118 "nbd_device": "/dev/nbd4", 00:17:08.118 "bdev_name": "nvme2n3" 00:17:08.118 }, 00:17:08.118 { 00:17:08.118 "nbd_device": "/dev/nbd5", 00:17:08.118 "bdev_name": "nvme3n1" 00:17:08.118 } 00:17:08.118 ]' 00:17:08.118 09:22:54 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:08.118 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:17:08.118 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:08.118 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:17:08.118 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:08.118 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:08.118 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:08.118 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:08.377 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:08.377 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:08.377 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:08.377 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.377 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.377 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:08.377 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:08.377 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.377 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:08.377 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:08.634 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:08.634 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:08.634 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:08.634 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.634 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.634 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:08.634 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:08.634 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.634 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:08.635 09:22:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:17:08.892 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:17:08.892 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:17:08.893 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:17:08.893 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.893 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.893 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:17:08.893 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:08.893 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.893 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:08.893 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:17:09.151 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:17:09.151 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:17:09.151 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:17:09.151 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:09.151 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:09.151 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:17:09.151 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:09.151 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:09.151 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:09.151 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:17:09.409 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:17:09.409 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:17:09.409 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:17:09.409 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:09.409 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:09.409 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:17:09.409 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:09.409 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:09.409 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:09.409 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:17:09.666 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:17:09.666 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:17:09.666 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:17:09.666 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:09.666 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:09.666 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:17:09.666 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:09.666 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:09.666 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:09.666 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:09.666 09:22:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:09.923 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:09.923 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:09.923 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:09.923 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:09.923 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:09.923 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:09.923 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:09.923 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:09.923 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:09.923 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:09.923 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:09.923 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:09.923 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:09.923 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:09.923 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:09.923 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:09.923 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:09.923 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:09.924 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:09.924 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:09.924 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:09.924 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:09.924 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:09.924 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:09.924 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:09.924 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:09.924 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:09.924 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:17:10.181 /dev/nbd0 00:17:10.181 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:10.181 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:10.181 09:22:56 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:17:10.181 09:22:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:10.181 09:22:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:10.181 09:22:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:10.181 09:22:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:17:10.181 09:22:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:10.181 09:22:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:10.181 09:22:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:10.181 09:22:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:10.181 1+0 records in 00:17:10.181 1+0 records out 00:17:10.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000515695 s, 7.9 MB/s 00:17:10.181 09:22:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.181 09:22:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:10.181 09:22:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.181 09:22:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:10.181 09:22:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:10.181 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:10.181 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:10.181 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:17:10.439 /dev/nbd1 00:17:10.439 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:10.439 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:10.439 09:22:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:17:10.439 09:22:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:10.439 09:22:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:10.439 09:22:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:10.439 09:22:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:17:10.439 09:22:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:10.439 09:22:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:10.439 09:22:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:10.439 09:22:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:10.439 1+0 records in 00:17:10.439 1+0 records out 00:17:10.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604096 s, 6.8 MB/s 00:17:10.439 09:22:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.439 09:22:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:10.439 09:22:56 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.439 09:22:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:10.439 09:22:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:10.439 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:10.439 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:10.439 09:22:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:17:10.697 /dev/nbd10 00:17:10.697 09:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:17:10.697 09:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:17:10.697 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:17:10.697 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:10.697 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:10.697 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:10.697 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:17:10.697 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:10.697 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:10.697 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:10.697 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:10.697 1+0 records in 00:17:10.697 1+0 records out 00:17:10.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613607 s, 6.7 MB/s 00:17:10.697 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.697 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:10.697 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.697 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:10.697 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:10.697 09:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:10.697 09:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:10.697 09:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:17:10.955 /dev/nbd11 00:17:11.213 09:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:17:11.213 09:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:17:11.213 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:17:11.213 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:11.213 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:11.213 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:11.213 09:22:57 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:17:11.213 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:11.213 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:11.213 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:11.213 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:11.213 1+0 records in 00:17:11.213 1+0 records out 00:17:11.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000566143 s, 7.2 MB/s 00:17:11.213 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.213 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:11.213 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.213 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:11.213 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:11.213 09:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:11.213 09:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:11.213 09:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:17:11.471 /dev/nbd12 00:17:11.471 09:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:17:11.471 09:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:17:11.471 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:17:11.471 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:11.471 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:11.471 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:11.471 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:17:11.471 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:11.471 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:11.471 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:11.471 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:11.471 1+0 records in 00:17:11.471 1+0 records out 00:17:11.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000810985 s, 5.1 MB/s 00:17:11.471 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.471 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:11.471 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.471 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:11.471 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:11.471 09:22:57 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:11.471 09:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:11.471 09:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:17:11.471 /dev/nbd13 00:17:11.731 09:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:17:11.731 09:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:17:11.731 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:17:11.731 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:11.731 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:11.731 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:11.731 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:17:11.731 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:11.731 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:11.731 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:11.731 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:11.731 1+0 records in 00:17:11.731 1+0 records out 00:17:11.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000825502 s, 5.0 MB/s 00:17:11.731 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.731 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:11.731 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.731 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:11.731 09:22:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:11.731 09:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:11.731 09:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:11.731 09:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:11.731 09:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:11.731 09:22:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:11.989 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:11.989 { 00:17:11.989 "nbd_device": "/dev/nbd0", 00:17:11.989 "bdev_name": "nvme0n1" 00:17:11.989 }, 00:17:11.989 { 00:17:11.989 "nbd_device": "/dev/nbd1", 00:17:11.989 "bdev_name": "nvme1n1" 00:17:11.989 }, 00:17:11.989 { 00:17:11.989 "nbd_device": "/dev/nbd10", 00:17:11.989 "bdev_name": "nvme2n1" 00:17:11.989 }, 00:17:11.989 { 00:17:11.989 "nbd_device": "/dev/nbd11", 00:17:11.989 "bdev_name": "nvme2n2" 00:17:11.989 }, 00:17:11.989 { 00:17:11.989 "nbd_device": "/dev/nbd12", 00:17:11.989 "bdev_name": "nvme2n3" 00:17:11.989 }, 00:17:11.989 { 00:17:11.989 "nbd_device": "/dev/nbd13", 00:17:11.989 "bdev_name": "nvme3n1" 00:17:11.989 } 00:17:11.989 ]' 00:17:11.989 09:22:58 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:11.989 { 00:17:11.989 "nbd_device": "/dev/nbd0", 00:17:11.989 "bdev_name": "nvme0n1" 00:17:11.989 }, 00:17:11.989 { 00:17:11.989 "nbd_device": "/dev/nbd1", 00:17:11.989 "bdev_name": "nvme1n1" 00:17:11.989 }, 00:17:11.989 { 00:17:11.989 "nbd_device": "/dev/nbd10", 00:17:11.989 "bdev_name": "nvme2n1" 00:17:11.989 }, 00:17:11.989 { 00:17:11.989 "nbd_device": "/dev/nbd11", 00:17:11.989 "bdev_name": "nvme2n2" 00:17:11.989 }, 00:17:11.990 { 00:17:11.990 "nbd_device": "/dev/nbd12", 00:17:11.990 "bdev_name": "nvme2n3" 00:17:11.990 }, 00:17:11.990 { 00:17:11.990 "nbd_device": "/dev/nbd13", 00:17:11.990 "bdev_name": "nvme3n1" 00:17:11.990 } 00:17:11.990 ]' 00:17:11.990 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:11.990 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:11.990 /dev/nbd1 00:17:11.990 /dev/nbd10 00:17:11.990 /dev/nbd11 00:17:11.990 /dev/nbd12 00:17:11.990 /dev/nbd13' 00:17:11.990 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:11.990 /dev/nbd1 00:17:11.990 /dev/nbd10 00:17:11.990 /dev/nbd11 00:17:11.990 /dev/nbd12 00:17:11.990 /dev/nbd13' 00:17:11.990 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:11.990 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:17:11.990 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:17:11.990 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:17:11.990 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:17:11.990 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:17:11.990 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:11.990 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:11.990 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:11.990 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:11.990 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:11.990 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:17:11.990 256+0 records in 00:17:11.990 256+0 records out 00:17:11.990 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00970891 s, 108 MB/s 00:17:11.990 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:11.990 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:11.990 256+0 records in 00:17:11.990 256+0 records out 00:17:11.990 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167005 s, 6.3 MB/s 00:17:11.990 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:11.990 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:12.248 256+0 records in 00:17:12.248 256+0 records out 00:17:12.248 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.164591 s, 6.4 MB/s 00:17:12.249 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:12.249 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:17:12.507 256+0 records in 00:17:12.507 256+0 records out 00:17:12.507 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145104 s, 7.2 MB/s 00:17:12.507 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:12.507 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:17:12.507 256+0 records in 00:17:12.507 256+0 records out 00:17:12.507 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137071 s, 7.6 MB/s 00:17:12.507 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:12.507 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:17:12.766 256+0 records in 00:17:12.766 256+0 records out 00:17:12.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163534 s, 6.4 MB/s 00:17:12.766 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:12.766 09:22:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:17:13.024 256+0 records in 00:17:13.024 256+0 records out 00:17:13.024 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162818 s, 6.4 MB/s 00:17:13.025 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:17:13.025 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:13.025 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:13.025 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:13.025 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:13.025 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:13.025 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:13.025 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:13.025 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:13.025 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:13.025 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:17:13.025 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:13.025 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:17:13.025 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:13.025 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:17:13.025 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:13.025 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:17:13.025 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:13.025 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:17:13.025 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:13.025 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:13.025 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:13.025 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:13.025 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:13.025 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:13.025 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:13.025 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:13.290 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:13.290 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:13.290 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:13.290 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:13.290 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:13.290 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:13.290 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:13.290 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.290 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:13.290 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:13.551 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:13.551 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:13.551 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:13.551 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:13.551 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:13.551 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:13.551 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:13.551 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.551 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:13.551 09:22:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:17:13.809 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:17:13.809 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:17:13.809 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:17:13.809 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:13.809 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:13.809 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:17:13.809 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:13.809 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.809 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:13.809 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:17:14.067 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:17:14.067 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:17:14.067 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:17:14.067 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:14.067 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:14.067 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:17:14.067 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:14.067 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:14.067 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:14.067 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:17:14.325 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:17:14.325 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:17:14.325 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:17:14.325 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:14.325 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:14.325 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:17:14.325 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:14.325 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:14.325 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:14.325 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:17:14.583 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:17:14.583 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:17:14.583 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:17:14.583 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:14.583 09:23:00 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:14.583 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:17:14.583 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:14.583 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:14.583 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:14.583 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:14.583 09:23:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:14.840 09:23:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:14.840 09:23:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:14.840 09:23:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:14.840 09:23:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:15.098 09:23:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:15.098 09:23:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:15.098 09:23:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:15.098 09:23:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:15.098 09:23:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:15.098 09:23:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:15.098 09:23:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:15.098 09:23:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:15.098 09:23:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:15.098 09:23:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:15.098 09:23:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:15.098 09:23:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:17:15.098 09:23:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:17:15.098 09:23:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:15.098 malloc_lvol_verify 00:17:15.356 09:23:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:15.613 f8d2420b-a064-40c4-a184-1dc22c6ee14a 00:17:15.613 09:23:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:15.872 73691a78-8737-43f3-aab5-76fdba788ad0 00:17:15.872 09:23:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:16.130 /dev/nbd0 00:17:16.130 09:23:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:17:16.130 mke2fs 1.46.5 (30-Dec-2021) 00:17:16.130 Discarding device blocks: 0/4096 done 
00:17:16.130 Creating filesystem with 4096 1k blocks and 1024 inodes 00:17:16.130 00:17:16.130 Allocating group tables: 0/1 done 00:17:16.130 Writing inode tables: 0/1 done 00:17:16.130 Creating journal (1024 blocks): done 00:17:16.130 Writing superblocks and filesystem accounting information: 0/1 done 00:17:16.130 00:17:16.130 09:23:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:17:16.130 09:23:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:16.130 09:23:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:16.130 09:23:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:16.130 09:23:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:16.130 09:23:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:16.130 09:23:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:16.130 09:23:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:16.387 09:23:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:16.387 09:23:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:16.388 09:23:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:16.388 09:23:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:16.388 09:23:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:16.388 09:23:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:16.388 09:23:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:16.388 09:23:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:16.388 09:23:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:17:16.388 09:23:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:17:16.388 09:23:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 76680 00:17:16.388 09:23:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 76680 ']' 00:17:16.388 09:23:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 76680 00:17:16.388 09:23:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:17:16.388 09:23:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:16.388 09:23:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76680 00:17:16.388 09:23:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:16.388 killing process with pid 76680 00:17:16.388 09:23:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:16.388 09:23:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76680' 00:17:16.388 09:23:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 76680 00:17:16.388 09:23:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 76680 00:17:17.762 09:23:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:17:17.762 00:17:17.762 real 0m12.652s 00:17:17.762 user 0m17.934s 00:17:17.762 sys 0m4.049s 00:17:17.762 09:23:03 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:17:17.762 09:23:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:17.762 ************************************ 00:17:17.762 END TEST bdev_nbd 00:17:17.762 ************************************ 00:17:17.762 09:23:03 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:17:17.762 09:23:03 blockdev_xnvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:17:17.762 09:23:03 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = nvme ']' 00:17:17.762 09:23:03 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = gpt ']' 00:17:17.762 09:23:03 blockdev_xnvme -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:17:17.762 09:23:03 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:17.762 09:23:03 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:17.762 09:23:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:17.762 ************************************ 00:17:17.762 START TEST bdev_fio 00:17:17.762 ************************************ 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:17.762 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:17:17.762 09:23:03 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n1]' 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n1 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme1n1]' 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme1n1 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n1]' 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n1 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n2]' 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n2 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n3]' 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n3 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme3n1]' 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme3n1 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:17.762 ************************************ 00:17:17.762 START TEST bdev_fio_rw_verify 00:17:17.762 ************************************ 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:17.762 
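The per-bdev job sections echoed into bdev.fio just above follow one fixed pattern: a [job_<name>] header plus a filename=<name> line for each bdev the test exercises. A minimal sketch of that generation step, with a bdevs_name array and config path mirroring the values visible in the trace (shown here only for illustration, not as the exact helper code):

  # Sketch only: append one fio job stanza per bdev to the generated config.
  # bdevs_name and fio_config mirror the values visible in the trace above.
  bdevs_name=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
  fio_config=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio

  for b in "${bdevs_name[@]}"; do
      {
          echo "[job_${b}]"     # one job section per bdev
          echo "filename=${b}"  # the spdk_bdev ioengine treats this as a bdev name
      } >> "$fio_config"
  done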
09:23:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:17:17.762 09:23:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:17.763 09:23:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:18.021 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:18.021 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:18.021 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:18.021 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:18.021 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:18.021 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:18.021 fio-3.35 00:17:18.021 Starting 6 threads 00:17:30.224 00:17:30.224 
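The fio_plugin helper traced above checks whether the spdk_bdev fio plugin is linked against a sanitizer runtime and, if so, preloads that runtime ahead of the plugin before launching fio. A condensed sketch of that logic, with the paths and fio arguments taken from this run and the error handling and loop over multiple sanitizers omitted:

  # Sketch of the LD_PRELOAD handling seen in the xtrace above (simplified).
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  fio_dir=/usr/src/fio

  # A plugin built with ASan must have libasan loaded before the plugin
  # itself, otherwise fio cannot dlopen it.
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

  LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" "$fio_dir/fio" \
      --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
      /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
      --verify_state_save=0 \
      --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output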
job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=77110: Fri Jul 12 09:23:15 2024 00:17:30.224 read: IOPS=28.8k, BW=113MiB/s (118MB/s)(1127MiB/10001msec) 00:17:30.224 slat (usec): min=3, max=1028, avg= 6.95, stdev= 4.79 00:17:30.224 clat (usec): min=133, max=7089, avg=650.84, stdev=275.92 00:17:30.224 lat (usec): min=136, max=7098, avg=657.79, stdev=276.48 00:17:30.224 clat percentiles (usec): 00:17:30.224 | 50.000th=[ 660], 99.000th=[ 1254], 99.900th=[ 3523], 99.990th=[ 6652], 00:17:30.224 | 99.999th=[ 7111] 00:17:30.224 write: IOPS=29.1k, BW=114MiB/s (119MB/s)(1136MiB/10001msec); 0 zone resets 00:17:30.224 slat (usec): min=10, max=3439, avg=27.07, stdev=25.32 00:17:30.224 clat (usec): min=109, max=7324, avg=713.48, stdev=266.45 00:17:30.224 lat (usec): min=136, max=7500, avg=740.55, stdev=268.75 00:17:30.224 clat percentiles (usec): 00:17:30.224 | 50.000th=[ 717], 99.000th=[ 1385], 99.900th=[ 2900], 99.990th=[ 5276], 00:17:30.224 | 99.999th=[ 7242] 00:17:30.224 bw ( KiB/s): min=97975, max=141328, per=100.00%, avg=116753.47, stdev=2207.69, samples=114 00:17:30.224 iops : min=24493, max=35332, avg=29188.11, stdev=551.92, samples=114 00:17:30.224 lat (usec) : 250=2.81%, 500=21.01%, 750=37.66%, 1000=31.49% 00:17:30.224 lat (msec) : 2=6.78%, 4=0.21%, 10=0.05% 00:17:30.224 cpu : usr=61.69%, sys=25.35%, ctx=7732, majf=0, minf=24520 00:17:30.224 IO depths : 1=12.2%, 2=24.8%, 4=50.2%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:30.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:30.224 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:30.224 issued rwts: total=288409,290731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:30.224 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:30.224 00:17:30.224 Run status group 0 (all jobs): 00:17:30.224 READ: bw=113MiB/s (118MB/s), 113MiB/s-113MiB/s (118MB/s-118MB/s), io=1127MiB (1181MB), run=10001-10001msec 00:17:30.224 WRITE: bw=114MiB/s (119MB/s), 114MiB/s-114MiB/s (119MB/s-119MB/s), io=1136MiB (1191MB), run=10001-10001msec 00:17:30.224 ----------------------------------------------------- 00:17:30.224 Suppressions used: 00:17:30.224 count bytes template 00:17:30.224 6 48 /usr/src/fio/parse.c 00:17:30.224 2120 203520 /usr/src/fio/iolog.c 00:17:30.224 1 8 libtcmalloc_minimal.so 00:17:30.224 1 904 libcrypto.so 00:17:30.224 ----------------------------------------------------- 00:17:30.224 00:17:30.224 00:17:30.224 real 0m12.397s 00:17:30.224 user 0m38.943s 00:17:30.224 sys 0m15.545s 00:17:30.224 09:23:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:30.224 09:23:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:30.224 ************************************ 00:17:30.224 END TEST bdev_fio_rw_verify 00:17:30.224 ************************************ 00:17:30.224 09:23:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:17:30.224 09:23:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:17:30.224 09:23:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:30.224 09:23:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:30.224 09:23:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:30.224 09:23:16 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1281 -- # local workload=trim 00:17:30.224 09:23:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:17:30.224 09:23:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:30.224 09:23:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:30.224 09:23:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:30.224 09:23:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:17:30.224 09:23:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:30.224 09:23:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:30.224 09:23:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:30.224 09:23:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:17:30.224 09:23:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:17:30.224 09:23:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:17:30.224 09:23:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:30.225 09:23:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "e42ec397-80f3-4c39-b8d9-ceaf156b2153"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e42ec397-80f3-4c39-b8d9-ceaf156b2153",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "083a526a-5e8a-4b1d-ad24-47f47140c4d8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "083a526a-5e8a-4b1d-ad24-47f47140c4d8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "f1e127c5-d7aa-4d04-97ef-fd4ee3b2ddbd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f1e127c5-d7aa-4d04-97ef-fd4ee3b2ddbd",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "3f70b78d-859f-48d7-9974-dacd7afae60f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3f70b78d-859f-48d7-9974-dacd7afae60f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "a71816d5-a10a-4b67-aafd-d755e79c9702"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a71816d5-a10a-4b67-aafd-d755e79c9702",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "59307a39-81f9-4cc7-8d0c-d06a80f7553a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "59307a39-81f9-4cc7-8d0c-d06a80f7553a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:17:30.225 09:23:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:17:30.225 09:23:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:30.225 /home/vagrant/spdk_repo/spdk 00:17:30.225 09:23:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # popd 00:17:30.225 09:23:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:17:30.225 09:23:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@364 -- # return 0 00:17:30.225 00:17:30.225 real 0m12.573s 00:17:30.225 user 0m39.050s 00:17:30.225 sys 0m15.616s 00:17:30.225 09:23:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:30.225 09:23:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:30.225 ************************************ 00:17:30.225 END TEST bdev_fio 00:17:30.225 ************************************ 00:17:30.225 09:23:16 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:17:30.225 09:23:16 blockdev_xnvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:30.225 09:23:16 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:30.225 09:23:16 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:17:30.225 09:23:16 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:30.225 09:23:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:30.225 ************************************ 00:17:30.225 START TEST bdev_verify 00:17:30.225 ************************************ 00:17:30.225 09:23:16 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:30.225 [2024-07-12 09:23:16.571846] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:30.225 [2024-07-12 09:23:16.572017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77277 ] 00:17:30.484 [2024-07-12 09:23:16.739134] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:30.742 [2024-07-12 09:23:16.952620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.742 [2024-07-12 09:23:16.952636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.308 Running I/O for 5 seconds... 
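bdevperf is started here with -m 0x3, and the two "Reactor started on core 0/1" notices that follow are the direct consequence: the core mask selects CPUs by bit position. A small illustrative helper (not part of the test scripts) that expands such a mask into the cores it selects:

  # Illustrative only: expand an SPDK/DPDK hex core mask into core numbers.
  mask_to_cores() {
      local mask=$(( $1 )) core=0 cores=()
      while (( mask > 0 )); do
          if (( mask & 1 )); then
              cores+=("$core")
          fi
          mask=$(( mask >> 1 ))
          core=$(( core + 1 ))
      done
      echo "${cores[@]}"
  }

  mask_to_cores 0x3   # prints: 0 1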
00:17:36.620 00:17:36.620 Latency(us) 00:17:36.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.620 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:36.620 Verification LBA range: start 0x0 length 0xa0000 00:17:36.620 nvme0n1 : 5.04 1651.94 6.45 0.00 0.00 77347.45 8996.31 85315.96 00:17:36.620 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:36.620 Verification LBA range: start 0xa0000 length 0xa0000 00:17:36.620 nvme0n1 : 5.03 1601.75 6.26 0.00 0.00 79760.85 7566.43 97231.59 00:17:36.620 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:36.620 Verification LBA range: start 0x0 length 0xbd0bd 00:17:36.620 nvme1n1 : 5.05 2763.00 10.79 0.00 0.00 46006.75 5540.77 65297.69 00:17:36.620 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:36.620 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:17:36.620 nvme1n1 : 5.05 2696.83 10.53 0.00 0.00 47219.53 5302.46 73400.32 00:17:36.620 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:36.620 Verification LBA range: start 0x0 length 0x80000 00:17:36.620 nvme2n1 : 5.04 1651.23 6.45 0.00 0.00 77010.53 10545.34 68634.07 00:17:36.620 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:36.620 Verification LBA range: start 0x80000 length 0x80000 00:17:36.620 nvme2n1 : 5.04 1599.69 6.25 0.00 0.00 79558.46 12034.79 84839.33 00:17:36.620 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:36.620 Verification LBA range: start 0x0 length 0x80000 00:17:36.620 nvme2n2 : 5.06 1669.99 6.52 0.00 0.00 75958.57 4766.25 81026.33 00:17:36.620 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:36.620 Verification LBA range: start 0x80000 length 0x80000 00:17:36.620 nvme2n2 : 5.06 1594.41 6.23 0.00 0.00 79644.54 12034.79 84362.71 00:17:36.620 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:36.620 Verification LBA range: start 0x0 length 0x80000 00:17:36.620 nvme2n3 : 5.06 1669.29 6.52 0.00 0.00 75851.57 5719.51 95801.72 00:17:36.620 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:36.620 Verification LBA range: start 0x80000 length 0x80000 00:17:36.620 nvme2n3 : 5.06 1593.58 6.22 0.00 0.00 79503.95 15847.80 84839.33 00:17:36.620 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:36.620 Verification LBA range: start 0x0 length 0x20000 00:17:36.620 nvme3n1 : 5.06 1670.61 6.53 0.00 0.00 75660.67 5064.15 99614.72 00:17:36.620 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:36.620 Verification LBA range: start 0x20000 length 0x20000 00:17:36.620 nvme3n1 : 5.07 1616.43 6.31 0.00 0.00 78231.29 1064.96 93418.59 00:17:36.620 =================================================================================================================== 00:17:36.620 Total : 21778.74 85.07 0.00 0.00 69994.45 1064.96 99614.72 00:17:37.552 00:17:37.552 real 0m7.143s 00:17:37.552 user 0m11.200s 00:17:37.552 sys 0m1.684s 00:17:37.552 09:23:23 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:37.552 09:23:23 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:37.552 ************************************ 00:17:37.552 END TEST bdev_verify 00:17:37.552 ************************************ 00:17:37.552 09:23:23 blockdev_xnvme -- 
common/autotest_common.sh@1142 -- # return 0 00:17:37.552 09:23:23 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:37.552 09:23:23 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:17:37.552 09:23:23 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:37.552 09:23:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:37.552 ************************************ 00:17:37.552 START TEST bdev_verify_big_io 00:17:37.552 ************************************ 00:17:37.552 09:23:23 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:37.552 [2024-07-12 09:23:23.793154] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:37.552 [2024-07-12 09:23:23.793370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77381 ] 00:17:37.810 [2024-07-12 09:23:23.988092] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:38.066 [2024-07-12 09:23:24.247076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.066 [2024-07-12 09:23:24.247091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.630 Running I/O for 5 seconds... 00:17:45.193 00:17:45.193 Latency(us) 00:17:45.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.193 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:45.193 Verification LBA range: start 0x0 length 0xa000 00:17:45.193 nvme0n1 : 6.03 116.77 7.30 0.00 0.00 1061883.60 174444.92 1060015.01 00:17:45.193 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:45.193 Verification LBA range: start 0xa000 length 0xa000 00:17:45.193 nvme0n1 : 5.96 118.04 7.38 0.00 0.00 1067503.24 14894.55 1159153.11 00:17:45.193 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:45.193 Verification LBA range: start 0x0 length 0xbd0b 00:17:45.193 nvme1n1 : 6.06 117.21 7.33 0.00 0.00 1039557.67 8340.95 1937005.85 00:17:45.193 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:45.193 Verification LBA range: start 0xbd0b length 0xbd0b 00:17:45.193 nvme1n1 : 5.97 142.10 8.88 0.00 0.00 854528.49 9592.09 1570957.50 00:17:45.193 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:45.193 Verification LBA range: start 0x0 length 0x8000 00:17:45.193 nvme2n1 : 6.06 92.36 5.77 0.00 0.00 1265404.20 46470.98 1860745.77 00:17:45.193 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:45.193 Verification LBA range: start 0x8000 length 0x8000 00:17:45.193 nvme2n1 : 5.95 100.79 6.30 0.00 0.00 1174600.69 14000.87 2791118.66 00:17:45.193 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:45.193 Verification LBA range: start 0x0 length 0x8000 00:17:45.193 nvme2n2 : 6.06 142.54 8.91 0.00 0.00 803909.16 12451.84 1128649.08 00:17:45.193 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 
65536) 00:17:45.193 Verification LBA range: start 0x8000 length 0x8000 00:17:45.193 nvme2n2 : 5.98 116.47 7.28 0.00 0.00 995553.13 20375.74 1265917.21 00:17:45.193 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:45.193 Verification LBA range: start 0x0 length 0x8000 00:17:45.193 nvme2n3 : 6.05 118.99 7.44 0.00 0.00 930927.30 11319.85 1418437.35 00:17:45.193 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:45.193 Verification LBA range: start 0x8000 length 0x8000 00:17:45.193 nvme2n3 : 5.97 139.32 8.71 0.00 0.00 800217.19 95325.09 957063.91 00:17:45.193 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:45.193 Verification LBA range: start 0x0 length 0x2000 00:17:45.193 nvme3n1 : 6.05 92.50 5.78 0.00 0.00 1159476.10 9889.98 3096158.95 00:17:45.193 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:45.193 Verification LBA range: start 0x2000 length 0x2000 00:17:45.193 nvme3n1 : 5.98 93.67 5.85 0.00 0.00 1165163.80 14239.19 3172419.03 00:17:45.193 =================================================================================================================== 00:17:45.193 Total : 1390.76 86.92 0.00 0.00 1005088.59 8340.95 3172419.03 00:17:46.130 00:17:46.130 real 0m8.596s 00:17:46.130 user 0m15.359s 00:17:46.130 sys 0m0.534s 00:17:46.130 09:23:32 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:46.130 ************************************ 00:17:46.130 END TEST bdev_verify_big_io 00:17:46.130 09:23:32 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:46.130 ************************************ 00:17:46.130 09:23:32 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:17:46.130 09:23:32 blockdev_xnvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:46.130 09:23:32 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:17:46.130 09:23:32 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:46.130 09:23:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:46.130 ************************************ 00:17:46.130 START TEST bdev_write_zeroes 00:17:46.130 ************************************ 00:17:46.130 09:23:32 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:46.130 [2024-07-12 09:23:32.407063] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:46.130 [2024-07-12 09:23:32.407286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77498 ] 00:17:46.389 [2024-07-12 09:23:32.578699] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.646 [2024-07-12 09:23:32.842057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.211 Running I/O for 1 seconds... 
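The three bdevperf passes in this part of the run (verify at 4 KiB, verify at 64 KiB for the big-IO case, and write_zeroes) share the same JSON config and queue depth and differ only in workload, IO size, run time, and core mask. Collected here purely as a recap, with the paths and flags copied from the traced commands above:

  # Recap of the bdevperf invocations traced above (not a new test).
  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

  "$bdevperf" --json "$conf" -q 128 -o 4096  -w verify       -t 5 -C -m 0x3 ''   # bdev_verify
  "$bdevperf" --json "$conf" -q 128 -o 65536 -w verify       -t 5 -C -m 0x3 ''   # bdev_verify_big_io
  "$bdevperf" --json "$conf" -q 128 -o 4096  -w write_zeroes -t 1 ''             # bdev_write_zeroes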
00:17:48.146 00:17:48.146 Latency(us) 00:17:48.146 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.146 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:48.146 nvme0n1 : 1.01 10677.92 41.71 0.00 0.00 11972.98 7804.74 21448.15 00:17:48.146 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:48.146 nvme1n1 : 1.01 14853.39 58.02 0.00 0.00 8575.76 4736.47 15252.01 00:17:48.146 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:48.146 nvme2n1 : 1.01 10650.98 41.61 0.00 0.00 11950.34 5242.88 21448.15 00:17:48.146 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:48.146 nvme2n2 : 1.01 10634.43 41.54 0.00 0.00 11956.46 5183.30 22401.40 00:17:48.146 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:48.146 nvme2n3 : 1.02 10679.95 41.72 0.00 0.00 11897.29 5272.67 23354.65 00:17:48.146 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:48.146 nvme3n1 : 1.02 10663.88 41.66 0.00 0.00 11905.41 5362.04 24307.90 00:17:48.146 =================================================================================================================== 00:17:48.146 Total : 68160.53 266.25 0.00 0.00 11203.20 4736.47 24307.90 00:17:49.520 00:17:49.520 real 0m3.179s 00:17:49.520 user 0m2.455s 00:17:49.520 sys 0m0.555s 00:17:49.520 09:23:35 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:49.520 09:23:35 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:49.520 ************************************ 00:17:49.520 END TEST bdev_write_zeroes 00:17:49.520 ************************************ 00:17:49.520 09:23:35 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:17:49.520 09:23:35 blockdev_xnvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:49.520 09:23:35 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:17:49.520 09:23:35 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:49.520 09:23:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:49.520 ************************************ 00:17:49.520 START TEST bdev_json_nonenclosed 00:17:49.520 ************************************ 00:17:49.520 09:23:35 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:49.520 [2024-07-12 09:23:35.631947] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:49.520 [2024-07-12 09:23:35.632125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77556 ] 00:17:49.520 [2024-07-12 09:23:35.805425] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.778 [2024-07-12 09:23:36.034326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.778 [2024-07-12 09:23:36.034445] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:49.778 [2024-07-12 09:23:36.034475] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:49.778 [2024-07-12 09:23:36.034497] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:50.345 00:17:50.345 real 0m0.970s 00:17:50.345 user 0m0.725s 00:17:50.345 sys 0m0.136s 00:17:50.345 09:23:36 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:17:50.345 09:23:36 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:50.345 09:23:36 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:50.345 ************************************ 00:17:50.345 END TEST bdev_json_nonenclosed 00:17:50.345 ************************************ 00:17:50.345 09:23:36 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234 00:17:50.345 09:23:36 blockdev_xnvme -- bdev/blockdev.sh@782 -- # true 00:17:50.345 09:23:36 blockdev_xnvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:50.345 09:23:36 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:17:50.345 09:23:36 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:50.345 09:23:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:50.345 ************************************ 00:17:50.345 START TEST bdev_json_nonarray 00:17:50.345 ************************************ 00:17:50.345 09:23:36 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:50.345 [2024-07-12 09:23:36.640839] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:50.345 [2024-07-12 09:23:36.641005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77583 ] 00:17:50.604 [2024-07-12 09:23:36.815511] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.862 [2024-07-12 09:23:37.045066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.862 [2024-07-12 09:23:37.045198] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
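Both negative tests here exercise the same top-level check in json_config_prepare_ctx: the config must be a JSON object whose "subsystems" key is an array. A rough illustration of the accepted skeleton (the same shape the save_config dump later in this log produces), written to a hypothetical fixture path purely for reference:

  # Illustrative only: the top-level shape the JSON config loader accepts.
  printf '%s\n' \
      '{' \
      '  "subsystems": [' \
      '    { "subsystem": "bdev", "config": [] }' \
      '  ]' \
      '}' > /tmp/valid_skeleton.json
  # nonenclosed.json omits the outer {} and fails with
  #   "Invalid JSON configuration: not enclosed in {}."
  # nonarray.json makes "subsystems" a non-array and fails with
  #   "Invalid JSON configuration: 'subsystems' should be an array."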
00:17:50.862 [2024-07-12 09:23:37.045228] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:50.862 [2024-07-12 09:23:37.045245] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:51.120 00:17:51.120 real 0m0.911s 00:17:51.120 user 0m0.675s 00:17:51.120 sys 0m0.129s 00:17:51.120 09:23:37 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:17:51.120 09:23:37 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:51.120 09:23:37 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:51.120 ************************************ 00:17:51.120 END TEST bdev_json_nonarray 00:17:51.120 ************************************ 00:17:51.378 09:23:37 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234 00:17:51.378 09:23:37 blockdev_xnvme -- bdev/blockdev.sh@785 -- # true 00:17:51.378 09:23:37 blockdev_xnvme -- bdev/blockdev.sh@787 -- # [[ xnvme == bdev ]] 00:17:51.378 09:23:37 blockdev_xnvme -- bdev/blockdev.sh@794 -- # [[ xnvme == gpt ]] 00:17:51.378 09:23:37 blockdev_xnvme -- bdev/blockdev.sh@798 -- # [[ xnvme == crypto_sw ]] 00:17:51.378 09:23:37 blockdev_xnvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:17:51.378 09:23:37 blockdev_xnvme -- bdev/blockdev.sh@811 -- # cleanup 00:17:51.378 09:23:37 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:51.378 09:23:37 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:51.378 09:23:37 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:17:51.378 09:23:37 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:17:51.378 09:23:37 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:17:51.378 09:23:37 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:17:51.378 09:23:37 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:51.943 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:01.941 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:01.941 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:01.941 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:01.941 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:01.941 00:18:01.941 real 1m10.863s 00:18:01.941 user 1m46.032s 00:18:01.941 sys 0m37.814s 00:18:01.941 09:23:47 blockdev_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:01.941 ************************************ 00:18:01.941 END TEST blockdev_xnvme 00:18:01.941 ************************************ 00:18:01.941 09:23:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:01.941 09:23:47 -- common/autotest_common.sh@1142 -- # return 0 00:18:01.941 09:23:47 -- spdk/autotest.sh@251 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:01.941 09:23:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:01.941 09:23:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:01.941 09:23:47 -- common/autotest_common.sh@10 -- # set +x 00:18:01.941 ************************************ 00:18:01.941 START TEST ublk 00:18:01.941 ************************************ 00:18:01.941 09:23:47 ublk -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:01.941 * Looking for test storage... 
00:18:01.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:01.941 09:23:47 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:01.941 09:23:47 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:01.941 09:23:47 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:01.941 09:23:47 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:01.941 09:23:47 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:01.941 09:23:47 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:01.941 09:23:47 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:01.941 09:23:47 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:01.941 09:23:47 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:18:01.941 09:23:47 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:18:01.941 09:23:47 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:18:01.941 09:23:47 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:18:01.941 09:23:47 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:18:01.941 09:23:47 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:18:01.941 09:23:47 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:18:01.941 09:23:47 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:18:01.941 09:23:47 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:18:01.941 09:23:47 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:18:01.941 09:23:47 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:18:01.941 09:23:47 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:18:01.941 09:23:47 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:01.941 09:23:47 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:01.941 09:23:47 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:01.941 ************************************ 00:18:01.941 START TEST test_save_ublk_config 00:18:01.941 ************************************ 00:18:01.941 09:23:47 ublk.test_save_ublk_config -- common/autotest_common.sh@1123 -- # test_save_config 00:18:01.941 09:23:47 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:18:01.941 09:23:47 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=77885 00:18:01.941 09:23:47 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:18:01.941 09:23:47 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:18:01.941 09:23:47 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 77885 00:18:01.941 09:23:47 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 77885 ']' 00:18:01.941 09:23:47 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.941 09:23:47 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:01.941 09:23:47 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.941 09:23:47 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:01.941 09:23:47 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:01.941 [2024-07-12 09:23:47.873421] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:18:01.941 [2024-07-12 09:23:47.873575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77885 ] 00:18:01.941 [2024-07-12 09:23:48.038121] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.200 [2024-07-12 09:23:48.302461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.767 09:23:49 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:02.767 09:23:49 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # return 0 00:18:02.767 09:23:49 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:18:02.767 09:23:49 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:18:02.767 09:23:49 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.767 09:23:49 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:02.767 [2024-07-12 09:23:49.036218] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:02.767 [2024-07-12 09:23:49.037310] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:02.767 malloc0 00:18:02.767 [2024-07-12 09:23:49.116362] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:02.767 [2024-07-12 09:23:49.116479] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:02.767 [2024-07-12 09:23:49.116497] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:02.767 [2024-07-12 09:23:49.116510] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:03.026 [2024-07-12 09:23:49.125380] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:03.026 [2024-07-12 09:23:49.125436] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:03.026 [2024-07-12 09:23:49.132240] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:03.026 [2024-07-12 09:23:49.132406] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:03.026 [2024-07-12 09:23:49.148221] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:03.026 0 00:18:03.026 09:23:49 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.026 09:23:49 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:18:03.026 09:23:49 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.026 09:23:49 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:03.284 09:23:49 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.284 09:23:49 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:18:03.284 "subsystems": [ 00:18:03.284 { 00:18:03.284 "subsystem": "keyring", 00:18:03.284 "config": [] 00:18:03.284 }, 00:18:03.284 { 00:18:03.285 "subsystem": "iobuf", 00:18:03.285 "config": [ 00:18:03.285 { 00:18:03.285 "method": "iobuf_set_options", 00:18:03.285 "params": { 00:18:03.285 "small_pool_count": 8192, 00:18:03.285 "large_pool_count": 1024, 00:18:03.285 "small_bufsize": 8192, 00:18:03.285 "large_bufsize": 135168 00:18:03.285 } 00:18:03.285 } 00:18:03.285 ] 00:18:03.285 }, 00:18:03.285 { 
00:18:03.285 "subsystem": "sock", 00:18:03.285 "config": [ 00:18:03.285 { 00:18:03.285 "method": "sock_set_default_impl", 00:18:03.285 "params": { 00:18:03.285 "impl_name": "posix" 00:18:03.285 } 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "method": "sock_impl_set_options", 00:18:03.285 "params": { 00:18:03.285 "impl_name": "ssl", 00:18:03.285 "recv_buf_size": 4096, 00:18:03.285 "send_buf_size": 4096, 00:18:03.285 "enable_recv_pipe": true, 00:18:03.285 "enable_quickack": false, 00:18:03.285 "enable_placement_id": 0, 00:18:03.285 "enable_zerocopy_send_server": true, 00:18:03.285 "enable_zerocopy_send_client": false, 00:18:03.285 "zerocopy_threshold": 0, 00:18:03.285 "tls_version": 0, 00:18:03.285 "enable_ktls": false 00:18:03.285 } 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "method": "sock_impl_set_options", 00:18:03.285 "params": { 00:18:03.285 "impl_name": "posix", 00:18:03.285 "recv_buf_size": 2097152, 00:18:03.285 "send_buf_size": 2097152, 00:18:03.285 "enable_recv_pipe": true, 00:18:03.285 "enable_quickack": false, 00:18:03.285 "enable_placement_id": 0, 00:18:03.285 "enable_zerocopy_send_server": true, 00:18:03.285 "enable_zerocopy_send_client": false, 00:18:03.285 "zerocopy_threshold": 0, 00:18:03.285 "tls_version": 0, 00:18:03.285 "enable_ktls": false 00:18:03.285 } 00:18:03.285 } 00:18:03.285 ] 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "subsystem": "vmd", 00:18:03.285 "config": [] 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "subsystem": "accel", 00:18:03.285 "config": [ 00:18:03.285 { 00:18:03.285 "method": "accel_set_options", 00:18:03.285 "params": { 00:18:03.285 "small_cache_size": 128, 00:18:03.285 "large_cache_size": 16, 00:18:03.285 "task_count": 2048, 00:18:03.285 "sequence_count": 2048, 00:18:03.285 "buf_count": 2048 00:18:03.285 } 00:18:03.285 } 00:18:03.285 ] 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "subsystem": "bdev", 00:18:03.285 "config": [ 00:18:03.285 { 00:18:03.285 "method": "bdev_set_options", 00:18:03.285 "params": { 00:18:03.285 "bdev_io_pool_size": 65535, 00:18:03.285 "bdev_io_cache_size": 256, 00:18:03.285 "bdev_auto_examine": true, 00:18:03.285 "iobuf_small_cache_size": 128, 00:18:03.285 "iobuf_large_cache_size": 16 00:18:03.285 } 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "method": "bdev_raid_set_options", 00:18:03.285 "params": { 00:18:03.285 "process_window_size_kb": 1024 00:18:03.285 } 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "method": "bdev_iscsi_set_options", 00:18:03.285 "params": { 00:18:03.285 "timeout_sec": 30 00:18:03.285 } 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "method": "bdev_nvme_set_options", 00:18:03.285 "params": { 00:18:03.285 "action_on_timeout": "none", 00:18:03.285 "timeout_us": 0, 00:18:03.285 "timeout_admin_us": 0, 00:18:03.285 "keep_alive_timeout_ms": 10000, 00:18:03.285 "arbitration_burst": 0, 00:18:03.285 "low_priority_weight": 0, 00:18:03.285 "medium_priority_weight": 0, 00:18:03.285 "high_priority_weight": 0, 00:18:03.285 "nvme_adminq_poll_period_us": 10000, 00:18:03.285 "nvme_ioq_poll_period_us": 0, 00:18:03.285 "io_queue_requests": 0, 00:18:03.285 "delay_cmd_submit": true, 00:18:03.285 "transport_retry_count": 4, 00:18:03.285 "bdev_retry_count": 3, 00:18:03.285 "transport_ack_timeout": 0, 00:18:03.285 "ctrlr_loss_timeout_sec": 0, 00:18:03.285 "reconnect_delay_sec": 0, 00:18:03.285 "fast_io_fail_timeout_sec": 0, 00:18:03.285 "disable_auto_failback": false, 00:18:03.285 "generate_uuids": false, 00:18:03.285 "transport_tos": 0, 00:18:03.285 "nvme_error_stat": false, 00:18:03.285 "rdma_srq_size": 0, 00:18:03.285 
"io_path_stat": false, 00:18:03.285 "allow_accel_sequence": false, 00:18:03.285 "rdma_max_cq_size": 0, 00:18:03.285 "rdma_cm_event_timeout_ms": 0, 00:18:03.285 "dhchap_digests": [ 00:18:03.285 "sha256", 00:18:03.285 "sha384", 00:18:03.285 "sha512" 00:18:03.285 ], 00:18:03.285 "dhchap_dhgroups": [ 00:18:03.285 "null", 00:18:03.285 "ffdhe2048", 00:18:03.285 "ffdhe3072", 00:18:03.285 "ffdhe4096", 00:18:03.285 "ffdhe6144", 00:18:03.285 "ffdhe8192" 00:18:03.285 ] 00:18:03.285 } 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "method": "bdev_nvme_set_hotplug", 00:18:03.285 "params": { 00:18:03.285 "period_us": 100000, 00:18:03.285 "enable": false 00:18:03.285 } 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "method": "bdev_malloc_create", 00:18:03.285 "params": { 00:18:03.285 "name": "malloc0", 00:18:03.285 "num_blocks": 8192, 00:18:03.285 "block_size": 4096, 00:18:03.285 "physical_block_size": 4096, 00:18:03.285 "uuid": "1a510dd0-77cf-48d8-b5a2-cc7dd3e236b4", 00:18:03.285 "optimal_io_boundary": 0 00:18:03.285 } 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "method": "bdev_wait_for_examine" 00:18:03.285 } 00:18:03.285 ] 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "subsystem": "scsi", 00:18:03.285 "config": null 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "subsystem": "scheduler", 00:18:03.285 "config": [ 00:18:03.285 { 00:18:03.285 "method": "framework_set_scheduler", 00:18:03.285 "params": { 00:18:03.285 "name": "static" 00:18:03.285 } 00:18:03.285 } 00:18:03.285 ] 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "subsystem": "vhost_scsi", 00:18:03.285 "config": [] 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "subsystem": "vhost_blk", 00:18:03.285 "config": [] 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "subsystem": "ublk", 00:18:03.285 "config": [ 00:18:03.285 { 00:18:03.285 "method": "ublk_create_target", 00:18:03.285 "params": { 00:18:03.285 "cpumask": "1" 00:18:03.285 } 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "method": "ublk_start_disk", 00:18:03.285 "params": { 00:18:03.285 "bdev_name": "malloc0", 00:18:03.285 "ublk_id": 0, 00:18:03.285 "num_queues": 1, 00:18:03.285 "queue_depth": 128 00:18:03.285 } 00:18:03.285 } 00:18:03.285 ] 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "subsystem": "nbd", 00:18:03.285 "config": [] 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "subsystem": "nvmf", 00:18:03.285 "config": [ 00:18:03.285 { 00:18:03.285 "method": "nvmf_set_config", 00:18:03.285 "params": { 00:18:03.285 "discovery_filter": "match_any", 00:18:03.285 "admin_cmd_passthru": { 00:18:03.285 "identify_ctrlr": false 00:18:03.285 } 00:18:03.285 } 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "method": "nvmf_set_max_subsystems", 00:18:03.285 "params": { 00:18:03.285 "max_subsystems": 1024 00:18:03.285 } 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "method": "nvmf_set_crdt", 00:18:03.285 "params": { 00:18:03.285 "crdt1": 0, 00:18:03.285 "crdt2": 0, 00:18:03.285 "crdt3": 0 00:18:03.285 } 00:18:03.285 } 00:18:03.285 ] 00:18:03.285 }, 00:18:03.285 { 00:18:03.285 "subsystem": "iscsi", 00:18:03.285 "config": [ 00:18:03.285 { 00:18:03.285 "method": "iscsi_set_options", 00:18:03.285 "params": { 00:18:03.285 "node_base": "iqn.2016-06.io.spdk", 00:18:03.285 "max_sessions": 128, 00:18:03.285 "max_connections_per_session": 2, 00:18:03.285 "max_queue_depth": 64, 00:18:03.285 "default_time2wait": 2, 00:18:03.285 "default_time2retain": 20, 00:18:03.285 "first_burst_length": 8192, 00:18:03.285 "immediate_data": true, 00:18:03.285 "allow_duplicated_isid": false, 00:18:03.285 "error_recovery_level": 0, 00:18:03.285 "nop_timeout": 60, 
00:18:03.285 "nop_in_interval": 30, 00:18:03.285 "disable_chap": false, 00:18:03.285 "require_chap": false, 00:18:03.285 "mutual_chap": false, 00:18:03.285 "chap_group": 0, 00:18:03.285 "max_large_datain_per_connection": 64, 00:18:03.285 "max_r2t_per_connection": 4, 00:18:03.285 "pdu_pool_size": 36864, 00:18:03.285 "immediate_data_pool_size": 16384, 00:18:03.285 "data_out_pool_size": 2048 00:18:03.285 } 00:18:03.285 } 00:18:03.285 ] 00:18:03.285 } 00:18:03.285 ] 00:18:03.285 }' 00:18:03.285 09:23:49 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 77885 00:18:03.285 09:23:49 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 77885 ']' 00:18:03.285 09:23:49 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 77885 00:18:03.285 09:23:49 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:18:03.285 09:23:49 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:03.285 09:23:49 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77885 00:18:03.285 09:23:49 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:03.285 09:23:49 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:03.285 killing process with pid 77885 00:18:03.285 09:23:49 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77885' 00:18:03.285 09:23:49 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 77885 00:18:03.285 09:23:49 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 77885 00:18:04.662 [2024-07-12 09:23:50.740647] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:04.662 [2024-07-12 09:23:50.780243] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:04.662 [2024-07-12 09:23:50.780453] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:04.662 [2024-07-12 09:23:50.788226] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:04.662 [2024-07-12 09:23:50.788296] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:04.662 [2024-07-12 09:23:50.788309] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:04.662 [2024-07-12 09:23:50.788341] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:18:04.662 [2024-07-12 09:23:50.788531] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:18:06.035 09:23:52 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=77940 00:18:06.035 09:23:52 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:18:06.035 09:23:52 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 77940 00:18:06.035 09:23:52 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 77940 ']' 00:18:06.035 09:23:52 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.035 09:23:52 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:18:06.035 "subsystems": [ 00:18:06.035 { 00:18:06.035 "subsystem": "keyring", 00:18:06.035 "config": [] 00:18:06.035 }, 00:18:06.035 { 00:18:06.035 "subsystem": "iobuf", 00:18:06.035 "config": [ 00:18:06.035 { 00:18:06.035 "method": "iobuf_set_options", 00:18:06.035 "params": { 00:18:06.035 "small_pool_count": 8192, 00:18:06.035 "large_pool_count": 
1024, 00:18:06.035 "small_bufsize": 8192, 00:18:06.035 "large_bufsize": 135168 00:18:06.035 } 00:18:06.035 } 00:18:06.035 ] 00:18:06.035 }, 00:18:06.036 { 00:18:06.036 "subsystem": "sock", 00:18:06.036 "config": [ 00:18:06.036 { 00:18:06.036 "method": "sock_set_default_impl", 00:18:06.036 "params": { 00:18:06.036 "impl_name": "posix" 00:18:06.036 } 00:18:06.036 }, 00:18:06.036 { 00:18:06.036 "method": "sock_impl_set_options", 00:18:06.036 "params": { 00:18:06.036 "impl_name": "ssl", 00:18:06.036 "recv_buf_size": 4096, 00:18:06.036 "send_buf_size": 4096, 00:18:06.036 "enable_recv_pipe": true, 00:18:06.036 "enable_quickack": false, 00:18:06.036 "enable_placement_id": 0, 00:18:06.036 "enable_zerocopy_send_server": true, 00:18:06.036 "enable_zerocopy_send_client": false, 00:18:06.036 "zerocopy_threshold": 0, 00:18:06.036 "tls_version": 0, 00:18:06.036 "enable_ktls": false 00:18:06.036 } 00:18:06.036 }, 00:18:06.036 { 00:18:06.036 "method": "sock_impl_set_options", 00:18:06.036 "params": { 00:18:06.036 "impl_name": "posix", 00:18:06.036 "recv_buf_size": 2097152, 00:18:06.036 "send_buf_size": 2097152, 00:18:06.036 "enable_recv_pipe": true, 00:18:06.036 "enable_quickack": false, 00:18:06.036 "enable_placement_id": 0, 00:18:06.036 "enable_zerocopy_send_server": true, 00:18:06.036 "enable_zerocopy_send_client": false, 00:18:06.036 "zerocopy_threshold": 0, 00:18:06.036 "tls_version": 0, 00:18:06.036 "enable_ktls": false 00:18:06.036 } 00:18:06.036 } 00:18:06.036 ] 00:18:06.036 }, 00:18:06.036 { 00:18:06.036 "subsystem": "vmd", 00:18:06.036 "config": [] 00:18:06.036 }, 00:18:06.036 { 00:18:06.036 "subsystem": "accel", 00:18:06.036 "config": [ 00:18:06.036 { 00:18:06.036 "method": "accel_set_options", 00:18:06.036 "params": { 00:18:06.036 "small_cache_size": 128, 00:18:06.036 "large_cache_size": 16, 00:18:06.036 "task_count": 2048, 00:18:06.036 "sequence_count": 2048, 00:18:06.036 "buf_count": 2048 00:18:06.036 } 00:18:06.036 } 00:18:06.036 ] 00:18:06.036 }, 00:18:06.036 { 00:18:06.036 "subsystem": "bdev", 00:18:06.036 "config": [ 00:18:06.036 { 00:18:06.036 "method": "bdev_set_options", 00:18:06.036 "params": { 00:18:06.036 "bdev_io_pool_size": 65535, 00:18:06.036 "bdev_io_cache_size": 256, 00:18:06.036 "bdev_auto_examine": true, 00:18:06.036 "iobuf_small_cache_size": 128, 00:18:06.036 "iobuf_large_cache_size": 16 00:18:06.036 } 00:18:06.036 }, 00:18:06.036 { 00:18:06.036 "method": "bdev_raid_set_options", 00:18:06.036 "params": { 00:18:06.036 "process_window_size_kb": 1024 00:18:06.036 } 00:18:06.036 }, 00:18:06.036 { 00:18:06.036 "method": "bdev_iscsi_set_options", 00:18:06.036 "params": { 00:18:06.036 "timeout_sec": 30 00:18:06.036 } 00:18:06.036 }, 00:18:06.036 { 00:18:06.036 "method": "bdev_nvme_set_options", 00:18:06.036 "params": { 00:18:06.036 "action_on_timeout": "none", 00:18:06.036 "timeout_us": 0, 00:18:06.036 "timeout_admin_us": 0, 00:18:06.036 "keep_alive_timeout_ms": 10000, 00:18:06.036 "arbitration_burst": 0, 00:18:06.036 "low_priority_weight": 0, 00:18:06.036 "medium_priority_weight": 0, 00:18:06.036 "high_priority_weight": 0, 00:18:06.036 "nvme_adminq_poll_period_us": 10000, 00:18:06.036 "nvme_ioq_poll_period_us": 0, 00:18:06.036 "io_queue_requests": 0, 00:18:06.036 "delay_cmd_submit": true, 00:18:06.036 "transport_retry_count": 4, 00:18:06.036 "bdev_retry_count": 3, 00:18:06.036 "transport_ack_timeout": 0, 00:18:06.036 "ctrlr_loss_timeout_sec": 0, 00:18:06.036 "reconnect_delay_sec": 0, 00:18:06.036 "fast_io_fail_timeout_sec": 0, 00:18:06.036 "disable_auto_failback": false, 
00:18:06.036 "generate_uuids": false, 00:18:06.036 "transport_tos": 0, 00:18:06.036 "nvme_error_stat": false, 00:18:06.036 "rdma_srq_size": 0, 00:18:06.036 "io_path_stat": false, 00:18:06.036 "allow_accel_sequence": false, 00:18:06.036 "rdma_max_cq_size": 0, 00:18:06.036 "rdma_cm_event_timeout_ms": 0, 00:18:06.036 "dhchap_digests": [ 00:18:06.036 "sha256", 00:18:06.036 "sha384", 00:18:06.036 "sha512" 00:18:06.036 ], 00:18:06.036 "dhchap_dhgroups": [ 00:18:06.036 "null", 00:18:06.036 "ffdhe2048", 00:18:06.036 "ffdhe3072", 00:18:06.036 "ffdhe4096", 00:18:06.036 "ffdhe6144", 00:18:06.036 "ffdhe8192" 00:18:06.036 ] 00:18:06.036 } 00:18:06.036 }, 00:18:06.036 { 00:18:06.036 "method": "bdev_nvme_set_hotplug", 00:18:06.036 "params": { 00:18:06.036 "period_us": 100000, 00:18:06.036 "enable": false 00:18:06.036 } 00:18:06.036 }, 00:18:06.036 { 00:18:06.036 "method": "bdev_malloc_create", 00:18:06.036 "params": { 00:18:06.036 "name": "malloc0", 00:18:06.036 "num_blocks": 8192, 00:18:06.036 "block_size": 4096, 00:18:06.036 "physical_block_size": 4096, 00:18:06.036 "uuid": "1a510dd0-77cf-48d8-b5a2-cc7dd3e236b4", 00:18:06.036 "optimal_io_boundary": 0 00:18:06.036 } 00:18:06.036 }, 00:18:06.036 { 00:18:06.036 "method": "bdev_wait_for_examine" 00:18:06.036 } 00:18:06.036 ] 00:18:06.036 }, 00:18:06.036 { 00:18:06.036 "subsystem": "scsi", 00:18:06.036 "config": null 00:18:06.036 }, 00:18:06.036 { 00:18:06.036 "subsystem": "scheduler", 00:18:06.036 "config": [ 00:18:06.036 { 00:18:06.036 "method": "framework_set_scheduler", 00:18:06.036 "params": { 00:18:06.036 "name": "static" 00:18:06.036 } 00:18:06.036 } 00:18:06.036 ] 00:18:06.036 }, 00:18:06.036 { 00:18:06.036 "subsystem": "vhost_scsi", 00:18:06.036 "config": [] 00:18:06.036 }, 00:18:06.036 { 00:18:06.036 "subsystem": "vhost_blk", 00:18:06.036 "config": [] 00:18:06.036 }, 00:18:06.036 { 00:18:06.036 "subsystem": "ublk", 00:18:06.036 "config": [ 00:18:06.036 { 00:18:06.036 "method": "ublk_create_target", 00:18:06.036 "params": { 00:18:06.036 "cpumask": "1" 00:18:06.036 } 00:18:06.036 }, 00:18:06.036 { 00:18:06.036 "method": "ublk_start_disk", 00:18:06.036 "params": { 00:18:06.036 "bdev_name": "malloc0", 00:18:06.036 "ublk_id": 0, 00:18:06.036 "num_queues": 1, 00:18:06.036 "queue_depth": 128 00:18:06.036 } 00:18:06.036 } 00:18:06.036 ] 00:18:06.036 }, 00:18:06.036 { 00:18:06.036 "subsystem": "nbd", 00:18:06.036 "config": [] 00:18:06.036 }, 00:18:06.036 { 00:18:06.036 "subsystem": "nvmf", 00:18:06.036 "config": [ 00:18:06.036 { 00:18:06.036 "method": "nvmf_set_config", 00:18:06.036 "params": { 00:18:06.036 "discovery_filter": "match_any", 00:18:06.036 "admin_cmd_passthru": { 00:18:06.036 "identify_ctrlr": false 00:18:06.036 } 00:18:06.036 } 00:18:06.036 }, 00:18:06.036 { 00:18:06.036 "method": "nvmf_set_max_subsystems", 00:18:06.036 "params": { 00:18:06.036 "max_subsystems": 1024 00:18:06.036 } 00:18:06.036 }, 00:18:06.036 { 00:18:06.036 "method": "nvmf_set_crdt", 00:18:06.036 "params": { 00:18:06.036 "crdt1": 0, 00:18:06.036 "crdt2": 0, 00:18:06.036 "crdt3": 0 00:18:06.036 } 00:18:06.036 } 00:18:06.036 ] 00:18:06.036 }, 00:18:06.036 { 00:18:06.036 "subsystem": "iscsi", 00:18:06.036 "config": [ 00:18:06.036 { 00:18:06.036 "method": "iscsi_set_options", 00:18:06.036 "params": { 00:18:06.036 "node_base": "iqn.2016-06.io.spdk", 00:18:06.036 "max_sessions": 128, 00:18:06.036 "max_connections_per_session": 2, 00:18:06.036 "max_queue_depth": 64, 00:18:06.037 "default_time2wait": 2, 00:18:06.037 "default_time2retain": 20, 00:18:06.037 "first_burst_length": 8192, 
00:18:06.037 "immediate_data": true, 00:18:06.037 "allow_duplicated_isid": false, 00:18:06.037 "error_recovery_level": 0, 00:18:06.037 "nop_timeout": 60, 00:18:06.037 "nop_in_interval": 30, 00:18:06.037 "disable_chap": false, 00:18:06.037 "require_chap": false, 00:18:06.037 "mutual_chap": false, 00:18:06.037 "chap_group": 0, 00:18:06.037 "max_large_datain_per_connection": 64, 00:18:06.037 "max_r2t_per_connection": 4, 00:18:06.037 "pdu_pool_size": 36864, 00:18:06.037 "immediate_data_pool_size": 16384, 00:18:06.037 "data_out_pool_size": 2048 00:18:06.037 } 00:18:06.037 } 00:18:06.037 ] 00:18:06.037 } 00:18:06.037 ] 00:18:06.037 }' 00:18:06.037 09:23:52 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:06.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.037 09:23:52 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.037 09:23:52 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:06.037 09:23:52 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:06.037 [2024-07-12 09:23:52.134147] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:06.037 [2024-07-12 09:23:52.134348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77940 ] 00:18:06.037 [2024-07-12 09:23:52.307539] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.294 [2024-07-12 09:23:52.555217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.224 [2024-07-12 09:23:53.442226] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:07.224 [2024-07-12 09:23:53.443369] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:07.224 [2024-07-12 09:23:53.450369] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:07.224 [2024-07-12 09:23:53.450471] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:07.224 [2024-07-12 09:23:53.450490] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:07.224 [2024-07-12 09:23:53.450500] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:07.224 [2024-07-12 09:23:53.458321] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:07.224 [2024-07-12 09:23:53.458350] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:07.224 [2024-07-12 09:23:53.465278] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:07.224 [2024-07-12 09:23:53.465422] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:07.224 [2024-07-12 09:23:53.480297] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:07.224 09:23:53 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:07.224 09:23:53 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # return 0 00:18:07.224 09:23:53 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:18:07.224 09:23:53 ublk.test_save_ublk_config -- 
ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:18:07.224 09:23:53 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.224 09:23:53 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:07.224 09:23:53 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.481 09:23:53 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:07.481 09:23:53 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:18:07.481 09:23:53 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 77940 00:18:07.481 09:23:53 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 77940 ']' 00:18:07.481 09:23:53 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 77940 00:18:07.481 09:23:53 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:18:07.481 09:23:53 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:07.481 09:23:53 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77940 00:18:07.481 09:23:53 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:07.481 killing process with pid 77940 00:18:07.481 09:23:53 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:07.481 09:23:53 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77940' 00:18:07.481 09:23:53 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 77940 00:18:07.481 09:23:53 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 77940 00:18:08.864 [2024-07-12 09:23:55.021016] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:08.864 [2024-07-12 09:23:55.056240] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:08.864 [2024-07-12 09:23:55.056469] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:08.864 [2024-07-12 09:23:55.063260] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:08.864 [2024-07-12 09:23:55.063360] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:08.864 [2024-07-12 09:23:55.063375] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:08.864 [2024-07-12 09:23:55.063407] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:18:08.864 [2024-07-12 09:23:55.063633] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:18:10.239 09:23:56 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:18:10.239 00:18:10.239 real 0m8.584s 00:18:10.239 user 0m7.608s 00:18:10.239 sys 0m1.949s 00:18:10.239 09:23:56 ublk.test_save_ublk_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:10.239 09:23:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:10.239 ************************************ 00:18:10.239 END TEST test_save_ublk_config 00:18:10.239 ************************************ 00:18:10.239 09:23:56 ublk -- common/autotest_common.sh@1142 -- # return 0 00:18:10.239 09:23:56 ublk -- ublk/ublk.sh@139 -- # spdk_pid=78018 00:18:10.239 09:23:56 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:10.239 09:23:56 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:10.239 09:23:56 ublk -- 
ublk/ublk.sh@141 -- # waitforlisten 78018 00:18:10.239 09:23:56 ublk -- common/autotest_common.sh@829 -- # '[' -z 78018 ']' 00:18:10.239 09:23:56 ublk -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.239 09:23:56 ublk -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:10.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.239 09:23:56 ublk -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.239 09:23:56 ublk -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:10.239 09:23:56 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:10.239 [2024-07-12 09:23:56.519063] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:10.239 [2024-07-12 09:23:56.519257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78018 ] 00:18:10.498 [2024-07-12 09:23:56.697991] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:10.757 [2024-07-12 09:23:56.902522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.757 [2024-07-12 09:23:56.902533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.323 09:23:57 ublk -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:11.323 09:23:57 ublk -- common/autotest_common.sh@862 -- # return 0 00:18:11.323 09:23:57 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:18:11.323 09:23:57 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:11.323 09:23:57 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:11.323 09:23:57 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:11.323 ************************************ 00:18:11.323 START TEST test_create_ublk 00:18:11.323 ************************************ 00:18:11.323 09:23:57 ublk.test_create_ublk -- common/autotest_common.sh@1123 -- # test_create_ublk 00:18:11.323 09:23:57 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:18:11.580 09:23:57 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.580 09:23:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:11.580 [2024-07-12 09:23:57.684214] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:11.580 [2024-07-12 09:23:57.686777] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:11.580 09:23:57 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.580 09:23:57 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:18:11.580 09:23:57 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:18:11.580 09:23:57 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.580 09:23:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:11.838 09:23:57 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.838 09:23:57 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:18:11.838 09:23:57 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:11.838 09:23:57 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 
00:18:11.838 09:23:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:11.838 [2024-07-12 09:23:57.944409] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:18:11.838 [2024-07-12 09:23:57.944941] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:11.838 [2024-07-12 09:23:57.944976] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:11.838 [2024-07-12 09:23:57.944999] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:11.838 [2024-07-12 09:23:57.953423] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:11.838 [2024-07-12 09:23:57.953466] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:11.838 [2024-07-12 09:23:57.960234] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:11.838 [2024-07-12 09:23:57.972500] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:11.838 [2024-07-12 09:23:57.987336] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:11.838 09:23:57 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.838 09:23:57 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:18:11.838 09:23:57 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:18:11.838 09:23:57 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:18:11.838 09:23:57 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.838 09:23:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:11.838 09:23:58 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.838 09:23:58 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:18:11.838 { 00:18:11.838 "ublk_device": "/dev/ublkb0", 00:18:11.838 "id": 0, 00:18:11.838 "queue_depth": 512, 00:18:11.838 "num_queues": 4, 00:18:11.838 "bdev_name": "Malloc0" 00:18:11.838 } 00:18:11.838 ]' 00:18:11.838 09:23:58 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:18:11.838 09:23:58 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:11.838 09:23:58 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:18:11.838 09:23:58 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:18:11.838 09:23:58 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:18:11.838 09:23:58 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:18:11.838 09:23:58 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:18:12.097 09:23:58 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:18:12.097 09:23:58 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:18:12.097 09:23:58 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:12.097 09:23:58 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:18:12.097 09:23:58 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:18:12.097 09:23:58 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:18:12.097 09:23:58 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:18:12.097 09:23:58 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 
00:18:12.097 09:23:58 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:18:12.097 09:23:58 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:18:12.097 09:23:58 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:18:12.097 09:23:58 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:18:12.097 09:23:58 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:18:12.097 09:23:58 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:18:12.097 09:23:58 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:18:12.097 fio: verification read phase will never start because write phase uses all of runtime 00:18:12.097 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:18:12.097 fio-3.35 00:18:12.097 Starting 1 process 00:18:24.296 00:18:24.296 fio_test: (groupid=0, jobs=1): err= 0: pid=78068: Fri Jul 12 09:24:08 2024 00:18:24.296 write: IOPS=10.5k, BW=40.9MiB/s (42.9MB/s)(409MiB/10001msec); 0 zone resets 00:18:24.296 clat (usec): min=70, max=10946, avg=93.88, stdev=167.49 00:18:24.296 lat (usec): min=72, max=10960, avg=94.68, stdev=167.51 00:18:24.296 clat percentiles (usec): 00:18:24.296 | 1.00th=[ 77], 5.00th=[ 78], 10.00th=[ 79], 20.00th=[ 80], 00:18:24.296 | 30.00th=[ 80], 40.00th=[ 81], 50.00th=[ 82], 60.00th=[ 83], 00:18:24.296 | 70.00th=[ 85], 80.00th=[ 89], 90.00th=[ 94], 95.00th=[ 99], 00:18:24.296 | 99.00th=[ 116], 99.50th=[ 139], 99.90th=[ 3261], 99.95th=[ 3687], 00:18:24.296 | 99.99th=[ 4080] 00:18:24.296 bw ( KiB/s): min=17488, max=43968, per=99.92%, avg=41872.00, stdev=5920.57, samples=19 00:18:24.296 iops : min= 4372, max=10992, avg=10468.00, stdev=1480.14, samples=19 00:18:24.296 lat (usec) : 100=95.80%, 250=3.77%, 500=0.01%, 750=0.01%, 1000=0.03% 00:18:24.296 lat (msec) : 2=0.13%, 4=0.24%, 10=0.01%, 20=0.01% 00:18:24.296 cpu : usr=3.11%, sys=7.74%, ctx=104796, majf=0, minf=797 00:18:24.296 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:24.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.296 issued rwts: total=0,104776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.296 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:24.296 00:18:24.296 Run status group 0 (all jobs): 00:18:24.296 WRITE: bw=40.9MiB/s (42.9MB/s), 40.9MiB/s-40.9MiB/s (42.9MB/s-42.9MB/s), io=409MiB (429MB), run=10001-10001msec 00:18:24.296 00:18:24.296 Disk stats (read/write): 00:18:24.296 ublkb0: ios=0/103666, merge=0/0, ticks=0/8869, in_queue=8870, util=99.09% 00:18:24.296 09:24:08 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:18:24.296 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.296 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.296 [2024-07-12 09:24:08.499275] ublk.c: 434:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:24.296 [2024-07-12 09:24:08.546684] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:24.297 [2024-07-12 09:24:08.551529] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:24.297 [2024-07-12 09:24:08.562234] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:24.297 [2024-07-12 09:24:08.562623] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:24.297 [2024-07-12 09:24:08.562643] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:24.297 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.297 09:24:08 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:18:24.297 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@648 -- # local es=0 00:18:24.297 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:18:24.297 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:24.297 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:24.297 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:24.297 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:24.297 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # rpc_cmd ublk_stop_disk 0 00:18:24.297 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.297 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.297 [2024-07-12 09:24:08.571375] ublk.c:1071:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:18:24.297 request: 00:18:24.297 { 00:18:24.297 "ublk_id": 0, 00:18:24.297 "method": "ublk_stop_disk", 00:18:24.297 "req_id": 1 00:18:24.297 } 00:18:24.297 Got JSON-RPC error response 00:18:24.297 response: 00:18:24.297 { 00:18:24.297 "code": -19, 00:18:24.297 "message": "No such device" 00:18:24.297 } 00:18:24.297 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:24.297 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # es=1 00:18:24.297 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:24.297 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:24.297 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:24.297 09:24:08 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:18:24.297 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.297 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.297 [2024-07-12 09:24:08.586324] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:18:24.297 [2024-07-12 09:24:08.594228] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:18:24.297 [2024-07-12 09:24:08.594308] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:24.297 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.297 09:24:08 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:24.297 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.297 09:24:08 ublk.test_create_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:18:24.297 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.297 09:24:08 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:18:24.297 09:24:08 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:24.297 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.297 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.297 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.297 09:24:08 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:24.297 09:24:08 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:18:24.297 09:24:08 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:24.297 09:24:08 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:24.297 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.297 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.297 09:24:08 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.297 09:24:08 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:24.297 09:24:08 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:18:24.297 09:24:09 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:24.297 00:18:24.297 real 0m11.354s 00:18:24.297 user 0m0.730s 00:18:24.297 sys 0m0.885s 00:18:24.297 09:24:09 ublk.test_create_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:24.297 09:24:09 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.297 ************************************ 00:18:24.297 END TEST test_create_ublk 00:18:24.297 ************************************ 00:18:24.297 09:24:09 ublk -- common/autotest_common.sh@1142 -- # return 0 00:18:24.297 09:24:09 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:18:24.297 09:24:09 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:24.297 09:24:09 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:24.297 09:24:09 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.297 ************************************ 00:18:24.297 START TEST test_create_multi_ublk 00:18:24.297 ************************************ 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@1123 -- # test_create_multi_ublk 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.297 [2024-07-12 09:24:09.094224] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:24.297 [2024-07-12 09:24:09.096533] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- 
# rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.297 [2024-07-12 09:24:09.338372] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:18:24.297 [2024-07-12 09:24:09.338850] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:24.297 [2024-07-12 09:24:09.338876] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:24.297 [2024-07-12 09:24:09.338887] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:24.297 [2024-07-12 09:24:09.347426] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:24.297 [2024-07-12 09:24:09.347462] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:24.297 [2024-07-12 09:24:09.354238] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:24.297 [2024-07-12 09:24:09.354984] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:24.297 [2024-07-12 09:24:09.365324] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.297 [2024-07-12 09:24:09.622395] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:18:24.297 [2024-07-12 09:24:09.622875] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:18:24.297 [2024-07-12 09:24:09.622897] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:24.297 [2024-07-12 09:24:09.622910] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:24.297 [2024-07-12 09:24:09.630251] ublk.c: 328:ublk_ctrl_process_cqe: 
*DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:24.297 [2024-07-12 09:24:09.630285] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:24.297 [2024-07-12 09:24:09.638228] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:24.297 [2024-07-12 09:24:09.638995] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:24.297 [2024-07-12 09:24:09.662237] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.297 09:24:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.298 09:24:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.298 09:24:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:18:24.298 09:24:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:18:24.298 09:24:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.298 09:24:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.298 [2024-07-12 09:24:09.923423] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:18:24.298 [2024-07-12 09:24:09.923918] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:18:24.298 [2024-07-12 09:24:09.923945] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:18:24.298 [2024-07-12 09:24:09.923956] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:18:24.298 [2024-07-12 09:24:09.931240] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:24.298 [2024-07-12 09:24:09.931267] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:24.298 [2024-07-12 09:24:09.939240] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:24.298 [2024-07-12 09:24:09.940001] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:18:24.298 [2024-07-12 09:24:09.942971] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:18:24.298 09:24:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.298 09:24:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:18:24.298 09:24:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:24.298 09:24:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:18:24.298 09:24:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.298 09:24:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- 
ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.298 [2024-07-12 09:24:10.203454] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:18:24.298 [2024-07-12 09:24:10.203973] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:18:24.298 [2024-07-12 09:24:10.203997] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:18:24.298 [2024-07-12 09:24:10.204011] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:18:24.298 [2024-07-12 09:24:10.211577] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:24.298 [2024-07-12 09:24:10.211645] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:24.298 [2024-07-12 09:24:10.219251] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:24.298 [2024-07-12 09:24:10.220067] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:18:24.298 [2024-07-12 09:24:10.225478] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:18:24.298 { 00:18:24.298 "ublk_device": "/dev/ublkb0", 00:18:24.298 "id": 0, 00:18:24.298 "queue_depth": 512, 00:18:24.298 "num_queues": 4, 00:18:24.298 "bdev_name": "Malloc0" 00:18:24.298 }, 00:18:24.298 { 00:18:24.298 "ublk_device": "/dev/ublkb1", 00:18:24.298 "id": 1, 00:18:24.298 "queue_depth": 512, 00:18:24.298 "num_queues": 4, 00:18:24.298 "bdev_name": "Malloc1" 00:18:24.298 }, 00:18:24.298 { 00:18:24.298 "ublk_device": "/dev/ublkb2", 00:18:24.298 "id": 2, 00:18:24.298 "queue_depth": 512, 00:18:24.298 "num_queues": 4, 00:18:24.298 "bdev_name": "Malloc2" 00:18:24.298 }, 00:18:24.298 { 00:18:24.298 "ublk_device": "/dev/ublkb3", 00:18:24.298 "id": 3, 00:18:24.298 "queue_depth": 512, 00:18:24.298 "num_queues": 4, 00:18:24.298 "bdev_name": "Malloc3" 00:18:24.298 } 00:18:24.298 ]' 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:18:24.298 09:24:10 
ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:18:24.298 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:18:24.556 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:24.556 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:18:24.556 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:24.556 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:18:24.556 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:18:24.556 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:24.556 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:18:24.556 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:18:24.556 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:18:24.557 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:18:24.557 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:18:24.815 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:24.815 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:18:24.815 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:24.815 09:24:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:18:24.815 09:24:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:18:24.815 09:24:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:24.815 09:24:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:18:24.815 09:24:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:18:24.815 09:24:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:18:24.815 09:24:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:18:24.815 09:24:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:18:24.815 09:24:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:24.815 09:24:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:18:25.074 09:24:11 ublk.test_create_multi_ublk -- 
ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:25.074 09:24:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:18:25.074 09:24:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:18:25.074 09:24:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:18:25.074 09:24:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:18:25.074 09:24:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:25.074 09:24:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:18:25.074 09:24:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.074 09:24:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:25.074 [2024-07-12 09:24:11.280473] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:25.074 [2024-07-12 09:24:11.321694] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:25.074 [2024-07-12 09:24:11.323290] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:25.074 [2024-07-12 09:24:11.324710] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:25.074 [2024-07-12 09:24:11.325049] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:25.074 [2024-07-12 09:24:11.325064] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:25.074 09:24:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.074 09:24:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:25.074 09:24:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:18:25.074 09:24:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.074 09:24:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:25.074 [2024-07-12 09:24:11.343361] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:25.074 [2024-07-12 09:24:11.380744] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:25.074 [2024-07-12 09:24:11.384712] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:25.074 [2024-07-12 09:24:11.391218] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:25.074 [2024-07-12 09:24:11.391700] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:25.074 [2024-07-12 09:24:11.391725] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:25.074 09:24:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.074 09:24:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:25.074 09:24:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:18:25.074 09:24:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.074 09:24:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:25.074 [2024-07-12 09:24:11.399474] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:18:25.334 [2024-07-12 09:24:11.439308] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:25.334 [2024-07-12 09:24:11.440684] ublk.c: 434:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:18:25.334 [2024-07-12 09:24:11.447256] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:25.334 [2024-07-12 09:24:11.447683] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:18:25.334 [2024-07-12 09:24:11.447709] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:18:25.334 09:24:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.334 09:24:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:25.334 09:24:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:18:25.334 09:24:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.334 09:24:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:25.334 [2024-07-12 09:24:11.463460] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:18:25.334 [2024-07-12 09:24:11.503318] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:25.334 [2024-07-12 09:24:11.504625] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:18:25.334 [2024-07-12 09:24:11.512282] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:25.334 [2024-07-12 09:24:11.512637] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:18:25.334 [2024-07-12 09:24:11.512656] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:18:25.334 09:24:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.334 09:24:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:18:25.592 [2024-07-12 09:24:11.798353] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:18:25.592 [2024-07-12 09:24:11.804236] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:18:25.592 [2024-07-12 09:24:11.804302] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:25.592 09:24:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:18:25.592 09:24:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:25.592 09:24:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:25.592 09:24:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.592 09:24:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:25.851 09:24:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.851 09:24:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:25.851 09:24:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:18:25.851 09:24:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.851 09:24:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.109 09:24:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.109 09:24:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:26.109 09:24:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:18:26.109 09:24:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 
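
Note: the stop/cleanup pass running here follows a fixed order: stop every exported ublk device, destroy the ublk target, then delete the backing Malloc bdevs. A condensed sketch of that sequence, using the suite's own rpc_cmd helper and MAX_DEV_ID=3 exactly as ublk.sh does:

  # stop each exported ublk device (issues UBLK_CMD_STOP_DEV, then UBLK_CMD_DEL_DEV)
  for i in $(seq 0 $MAX_DEV_ID); do
      rpc_cmd ublk_stop_disk $i
  done
  # tear down the ublk target itself, allowing up to 120 s for the shutdown path
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target
  # finally remove the Malloc bdevs that backed the devices
  for i in $(seq 0 $MAX_DEV_ID); do
      rpc_cmd bdev_malloc_delete Malloc$i
  done
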
00:18:26.109 09:24:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.676 09:24:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.676 09:24:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:26.676 09:24:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:18:26.676 09:24:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.676 09:24:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.933 09:24:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.933 09:24:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:18:26.933 09:24:13 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:26.933 09:24:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.933 09:24:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.933 09:24:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.933 09:24:13 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:26.933 09:24:13 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:18:26.933 09:24:13 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:26.934 09:24:13 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:26.934 09:24:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.934 09:24:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.934 09:24:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.934 09:24:13 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:26.934 09:24:13 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:18:26.934 09:24:13 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:26.934 00:18:26.934 real 0m4.089s 00:18:26.934 user 0m1.309s 00:18:26.934 sys 0m0.172s 00:18:26.934 09:24:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:26.934 09:24:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:26.934 ************************************ 00:18:26.934 END TEST test_create_multi_ublk 00:18:26.934 ************************************ 00:18:26.934 09:24:13 ublk -- common/autotest_common.sh@1142 -- # return 0 00:18:26.934 09:24:13 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:18:26.934 09:24:13 ublk -- ublk/ublk.sh@147 -- # cleanup 00:18:26.934 09:24:13 ublk -- ublk/ublk.sh@130 -- # killprocess 78018 00:18:26.934 09:24:13 ublk -- common/autotest_common.sh@948 -- # '[' -z 78018 ']' 00:18:26.934 09:24:13 ublk -- common/autotest_common.sh@952 -- # kill -0 78018 00:18:26.934 09:24:13 ublk -- common/autotest_common.sh@953 -- # uname 00:18:26.934 09:24:13 ublk -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:26.934 09:24:13 ublk -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78018 00:18:26.934 09:24:13 ublk -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:26.934 killing process with pid 78018 00:18:26.934 09:24:13 ublk -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:26.934 09:24:13 ublk -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 78018' 00:18:26.934 09:24:13 ublk -- common/autotest_common.sh@967 -- # kill 78018 00:18:26.934 09:24:13 ublk -- common/autotest_common.sh@972 -- # wait 78018 00:18:27.865 [2024-07-12 09:24:14.214397] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:18:27.865 [2024-07-12 09:24:14.214472] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:18:29.241 00:18:29.241 real 0m27.670s 00:18:29.241 user 0m42.381s 00:18:29.241 sys 0m7.735s 00:18:29.241 09:24:15 ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:29.241 09:24:15 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:29.241 ************************************ 00:18:29.241 END TEST ublk 00:18:29.241 ************************************ 00:18:29.241 09:24:15 -- common/autotest_common.sh@1142 -- # return 0 00:18:29.241 09:24:15 -- spdk/autotest.sh@252 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:29.241 09:24:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:29.241 09:24:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:29.241 09:24:15 -- common/autotest_common.sh@10 -- # set +x 00:18:29.241 ************************************ 00:18:29.241 START TEST ublk_recovery 00:18:29.241 ************************************ 00:18:29.241 09:24:15 ublk_recovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:29.241 * Looking for test storage... 00:18:29.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:29.241 09:24:15 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:29.241 09:24:15 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:29.241 09:24:15 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:29.241 09:24:15 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:29.241 09:24:15 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:29.241 09:24:15 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:29.241 09:24:15 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:29.241 09:24:15 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:29.241 09:24:15 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:18:29.241 09:24:15 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:18:29.241 09:24:15 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=78401 00:18:29.241 09:24:15 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:29.241 09:24:15 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:29.241 09:24:15 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 78401 00:18:29.241 09:24:15 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 78401 ']' 00:18:29.241 09:24:15 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.241 09:24:15 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:29.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.241 09:24:15 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
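
Note: before the recovery scenario starts, ublk_recovery.sh loads the kernel driver and brings up a fresh SPDK target with ublk debug logging enabled, then waits for its RPC socket. Roughly (waitforlisten is the autotest helper that polls /var/tmp/spdk.sock; the backgrounding and pid capture are a sketch of what the script does):

  modprobe ublk_drv
  # -m 0x3 gives the target two reactors (cores 0 and 1); -L ublk enables the
  # ublk debug trace seen throughout this log
  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &
  spdk_pid=$!
  waitforlisten $spdk_pid
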
00:18:29.241 09:24:15 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:29.241 09:24:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:29.241 [2024-07-12 09:24:15.582755] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:29.241 [2024-07-12 09:24:15.582940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78401 ] 00:18:29.500 [2024-07-12 09:24:15.750932] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:29.759 [2024-07-12 09:24:15.974373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.759 [2024-07-12 09:24:15.974381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.326 09:24:16 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:30.326 09:24:16 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:18:30.326 09:24:16 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:18:30.326 09:24:16 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.326 09:24:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:30.326 [2024-07-12 09:24:16.670286] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:30.326 [2024-07-12 09:24:16.672938] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:30.326 09:24:16 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.326 09:24:16 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:30.326 09:24:16 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.326 09:24:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:30.584 malloc0 00:18:30.584 09:24:16 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.584 09:24:16 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:18:30.584 09:24:16 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.584 09:24:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:30.584 [2024-07-12 09:24:16.798442] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:18:30.584 [2024-07-12 09:24:16.798601] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:18:30.584 [2024-07-12 09:24:16.798618] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:30.584 [2024-07-12 09:24:16.798629] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:30.584 [2024-07-12 09:24:16.807333] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:30.584 [2024-07-12 09:24:16.807367] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:30.584 [2024-07-12 09:24:16.814222] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:30.584 [2024-07-12 09:24:16.814425] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:30.584 [2024-07-12 09:24:16.825238] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:30.584 1 00:18:30.584 09:24:16 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:18:30.584 09:24:16 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:18:31.520 09:24:17 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=78436 00:18:31.520 09:24:17 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:18:31.520 09:24:17 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:18:31.779 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:31.779 fio-3.35 00:18:31.779 Starting 1 process 00:18:37.045 09:24:22 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 78401 00:18:37.045 09:24:22 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:18:42.313 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 78401 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:18:42.313 09:24:27 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=78544 00:18:42.313 09:24:27 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:42.313 09:24:27 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:42.313 09:24:27 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 78544 00:18:42.313 09:24:27 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 78544 ']' 00:18:42.313 09:24:27 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.313 09:24:27 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:42.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.313 09:24:27 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.313 09:24:27 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:42.313 09:24:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:42.313 [2024-07-12 09:24:27.992880] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
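
Note: the crash is induced deliberately. A 64 MiB malloc bdev is exported as /dev/ublkb1, fio is started against it in the background, and the target is then hard-killed while I/O is still in flight. A sketch of the sequence recorded above, with the device and fio parameters taken from ublk_recovery.sh:

  rpc_cmd ublk_create_target
  rpc_cmd bdev_malloc_create -b malloc0 64 4096      # 64 MiB backing bdev, 4 KiB blocks
  rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128      # ublk id 1, 2 queues, queue depth 128
  # drive random read/write traffic from cores 2-3 for 60 s
  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
      --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
      --time_based --runtime=60 &
  fio_proc=$!
  sleep 5
  kill -9 $spdk_pid      # SIGKILL the target while fio keeps queueing I/O
  sleep 5
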
00:18:42.313 [2024-07-12 09:24:27.993053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78544 ] 00:18:42.313 [2024-07-12 09:24:28.177927] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:42.313 [2024-07-12 09:24:28.434556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.313 [2024-07-12 09:24:28.434564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.880 09:24:29 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:42.880 09:24:29 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:18:42.880 09:24:29 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:18:42.880 09:24:29 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.880 09:24:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:42.880 [2024-07-12 09:24:29.183214] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:42.880 [2024-07-12 09:24:29.185701] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:42.880 09:24:29 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.880 09:24:29 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:42.880 09:24:29 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.880 09:24:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:43.138 malloc0 00:18:43.138 09:24:29 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.138 09:24:29 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:18:43.138 09:24:29 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.138 09:24:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:43.138 [2024-07-12 09:24:29.317391] ublk.c:2095:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:18:43.138 [2024-07-12 09:24:29.317452] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:43.138 [2024-07-12 09:24:29.317465] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:43.138 [2024-07-12 09:24:29.325343] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:43.138 [2024-07-12 09:24:29.325372] ublk.c:2024:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:18:43.138 [2024-07-12 09:24:29.325475] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:18:43.138 1 00:18:43.138 09:24:29 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.138 09:24:29 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 78436 00:19:09.668 [2024-07-12 09:24:53.034254] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:19:09.668 [2024-07-12 09:24:53.040638] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:19:09.668 [2024-07-12 09:24:53.046541] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:19:09.668 [2024-07-12 09:24:53.046610] ublk.c: 378:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:19:36.233 00:19:36.233 
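
Note: recovery itself is a second target instance (pid 78544) re-adopting the still-open kernel device. It recreates the same backing bdev and asks ublk to recover id 1 instead of starting a new disk, which drives the UBLK_CMD_GET_DEV_INFO / START_USER_RECOVERY / END_USER_RECOVERY sequence logged above. In outline, on the replacement target:

  rpc_cmd ublk_create_target
  rpc_cmd bdev_malloc_create -b malloc0 64 4096
  # re-attach to the existing /dev/ublkb1 rather than creating a new device
  rpc_cmd ublk_recover_disk malloc0 1

The fio job started before the kill keeps running across the outage; its summary a few lines below is the proof that I/O completed end to end on the recovered device.
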
fio_test: (groupid=0, jobs=1): err= 0: pid=78439: Fri Jul 12 09:25:18 2024 00:19:36.233 read: IOPS=9908, BW=38.7MiB/s (40.6MB/s)(2322MiB/60002msec) 00:19:36.233 slat (nsec): min=1994, max=736436, avg=6599.71, stdev=2842.42 00:19:36.233 clat (usec): min=849, max=30216k, avg=5891.58, stdev=287911.90 00:19:36.233 lat (usec): min=854, max=30216k, avg=5898.18, stdev=287911.90 00:19:36.233 clat percentiles (usec): 00:19:36.233 | 1.00th=[ 2540], 5.00th=[ 2769], 10.00th=[ 2835], 20.00th=[ 2900], 00:19:36.233 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:19:36.233 | 70.00th=[ 3064], 80.00th=[ 3130], 90.00th=[ 3326], 95.00th=[ 4424], 00:19:36.233 | 99.00th=[ 6521], 99.50th=[ 7046], 99.90th=[ 8586], 99.95th=[ 9372], 00:19:36.233 | 99.99th=[13435] 00:19:36.233 bw ( KiB/s): min= 2472, max=85392, per=100.00%, avg=78066.63, stdev=13318.92, samples=60 00:19:36.233 iops : min= 618, max=21348, avg=19516.63, stdev=3329.72, samples=60 00:19:36.233 write: IOPS=9898, BW=38.7MiB/s (40.5MB/s)(2320MiB/60002msec); 0 zone resets 00:19:36.233 slat (usec): min=2, max=1154, avg= 6.59, stdev= 3.17 00:19:36.233 clat (usec): min=878, max=30216k, avg=7018.39, stdev=337214.34 00:19:36.233 lat (usec): min=883, max=30216k, avg=7024.98, stdev=337214.34 00:19:36.233 clat percentiles (msec): 00:19:36.233 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 4], 00:19:36.233 | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 00:19:36.233 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:19:36.233 | 99.00th=[ 7], 99.50th=[ 8], 99.90th=[ 9], 99.95th=[ 10], 00:19:36.233 | 99.99th=[17113] 00:19:36.233 bw ( KiB/s): min= 2664, max=83232, per=100.00%, avg=77988.08, stdev=13281.41, samples=60 00:19:36.233 iops : min= 666, max=20808, avg=19497.00, stdev=3320.35, samples=60 00:19:36.233 lat (usec) : 1000=0.01% 00:19:36.233 lat (msec) : 2=0.06%, 4=93.93%, 10=5.97%, 20=0.02%, >=2000=0.01% 00:19:36.233 cpu : usr=5.74%, sys=12.18%, ctx=38974, majf=0, minf=13 00:19:36.233 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:36.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:36.233 issued rwts: total=594548,593909,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.233 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:36.233 00:19:36.233 Run status group 0 (all jobs): 00:19:36.233 READ: bw=38.7MiB/s (40.6MB/s), 38.7MiB/s-38.7MiB/s (40.6MB/s-40.6MB/s), io=2322MiB (2435MB), run=60002-60002msec 00:19:36.233 WRITE: bw=38.7MiB/s (40.5MB/s), 38.7MiB/s-38.7MiB/s (40.5MB/s-40.5MB/s), io=2320MiB (2433MB), run=60002-60002msec 00:19:36.233 00:19:36.233 Disk stats (read/write): 00:19:36.233 ublkb1: ios=592293/591633, merge=0/0, ticks=3441861/4038719, in_queue=7480581, util=99.93% 00:19:36.233 09:25:18 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:19:36.233 09:25:18 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.233 09:25:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.233 [2024-07-12 09:25:18.091071] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:19:36.233 [2024-07-12 09:25:18.136268] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:36.233 [2024-07-12 09:25:18.140216] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:19:36.233 [2024-07-12 09:25:18.149240] ublk.c: 
328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:36.233 [2024-07-12 09:25:18.149417] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:19:36.233 [2024-07-12 09:25:18.149436] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:19:36.233 09:25:18 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.233 09:25:18 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:19:36.233 09:25:18 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.233 09:25:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.233 [2024-07-12 09:25:18.159360] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:19:36.233 [2024-07-12 09:25:18.167225] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:19:36.233 [2024-07-12 09:25:18.167279] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:36.233 09:25:18 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.233 09:25:18 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:19:36.233 09:25:18 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:19:36.233 09:25:18 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 78544 00:19:36.233 09:25:18 ublk_recovery -- common/autotest_common.sh@948 -- # '[' -z 78544 ']' 00:19:36.233 09:25:18 ublk_recovery -- common/autotest_common.sh@952 -- # kill -0 78544 00:19:36.233 09:25:18 ublk_recovery -- common/autotest_common.sh@953 -- # uname 00:19:36.233 09:25:18 ublk_recovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:36.233 09:25:18 ublk_recovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78544 00:19:36.233 09:25:18 ublk_recovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:36.233 09:25:18 ublk_recovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:36.233 killing process with pid 78544 00:19:36.233 09:25:18 ublk_recovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78544' 00:19:36.233 09:25:18 ublk_recovery -- common/autotest_common.sh@967 -- # kill 78544 00:19:36.233 09:25:18 ublk_recovery -- common/autotest_common.sh@972 -- # wait 78544 00:19:36.233 [2024-07-12 09:25:19.154641] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:19:36.233 [2024-07-12 09:25:19.154712] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:19:36.233 00:19:36.233 real 1m5.078s 00:19:36.233 user 1m51.296s 00:19:36.233 sys 0m18.677s 00:19:36.233 09:25:20 ublk_recovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:36.233 09:25:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.233 ************************************ 00:19:36.233 END TEST ublk_recovery 00:19:36.233 ************************************ 00:19:36.233 09:25:20 -- common/autotest_common.sh@1142 -- # return 0 00:19:36.233 09:25:20 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:36.233 09:25:20 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:36.233 09:25:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:36.233 09:25:20 -- common/autotest_common.sh@10 -- # set +x 00:19:36.233 09:25:20 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:36.233 09:25:20 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:19:36.233 09:25:20 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:19:36.233 09:25:20 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:19:36.233 09:25:20 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:19:36.233 09:25:20 -- 
spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:19:36.233 09:25:20 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:19:36.233 09:25:20 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:19:36.233 09:25:20 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:19:36.233 09:25:20 -- spdk/autotest.sh@339 -- # '[' 1 -eq 1 ']' 00:19:36.233 09:25:20 -- spdk/autotest.sh@340 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:36.233 09:25:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:36.233 09:25:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:36.233 09:25:20 -- common/autotest_common.sh@10 -- # set +x 00:19:36.233 ************************************ 00:19:36.233 START TEST ftl 00:19:36.233 ************************************ 00:19:36.233 09:25:20 ftl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:36.233 * Looking for test storage... 00:19:36.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:36.234 09:25:20 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:36.234 09:25:20 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:36.234 09:25:20 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:36.234 09:25:20 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:36.234 09:25:20 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:36.234 09:25:20 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:36.234 09:25:20 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:36.234 09:25:20 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:36.234 09:25:20 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:36.234 09:25:20 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:36.234 09:25:20 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:36.234 09:25:20 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:36.234 09:25:20 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:36.234 09:25:20 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:36.234 09:25:20 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:36.234 09:25:20 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:36.234 09:25:20 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:36.234 09:25:20 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:36.234 09:25:20 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:36.234 09:25:20 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:36.234 09:25:20 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:36.234 09:25:20 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:36.234 09:25:20 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:36.234 09:25:20 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:36.234 09:25:20 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:36.234 09:25:20 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:36.234 09:25:20 ftl -- ftl/common.sh@23 -- # 
spdk_ini_pid= 00:19:36.234 09:25:20 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:36.234 09:25:20 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:36.234 09:25:20 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:36.234 09:25:20 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:19:36.234 09:25:20 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:19:36.234 09:25:20 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:19:36.234 09:25:20 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:19:36.234 09:25:20 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:36.234 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:36.234 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:36.234 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:36.234 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:36.234 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:36.234 09:25:21 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=79327 00:19:36.234 09:25:21 ftl -- ftl/ftl.sh@38 -- # waitforlisten 79327 00:19:36.234 09:25:21 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:19:36.234 09:25:21 ftl -- common/autotest_common.sh@829 -- # '[' -z 79327 ']' 00:19:36.234 09:25:21 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.234 09:25:21 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:36.234 09:25:21 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.234 09:25:21 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:36.234 09:25:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:36.234 [2024-07-12 09:25:21.327250] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:19:36.234 [2024-07-12 09:25:21.327472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79327 ] 00:19:36.234 [2024-07-12 09:25:21.499453] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.234 [2024-07-12 09:25:21.688383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.234 09:25:22 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:36.234 09:25:22 ftl -- common/autotest_common.sh@862 -- # return 0 00:19:36.234 09:25:22 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:19:36.234 09:25:22 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:19:37.177 09:25:23 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:19:37.177 09:25:23 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:37.743 09:25:23 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:19:37.743 09:25:23 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:37.743 09:25:23 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:38.001 09:25:24 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:19:38.001 09:25:24 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:19:38.001 09:25:24 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:19:38.001 09:25:24 ftl -- ftl/ftl.sh@50 -- # break 00:19:38.001 09:25:24 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:19:38.001 09:25:24 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:19:38.001 09:25:24 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:38.001 09:25:24 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:38.260 09:25:24 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:19:38.260 09:25:24 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:19:38.260 09:25:24 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:19:38.260 09:25:24 ftl -- ftl/ftl.sh@63 -- # break 00:19:38.260 09:25:24 ftl -- ftl/ftl.sh@66 -- # killprocess 79327 00:19:38.260 09:25:24 ftl -- common/autotest_common.sh@948 -- # '[' -z 79327 ']' 00:19:38.260 09:25:24 ftl -- common/autotest_common.sh@952 -- # kill -0 79327 00:19:38.260 09:25:24 ftl -- common/autotest_common.sh@953 -- # uname 00:19:38.260 09:25:24 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:38.260 09:25:24 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79327 00:19:38.260 09:25:24 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:38.260 09:25:24 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:38.260 09:25:24 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79327' 00:19:38.260 killing process with pid 79327 00:19:38.260 09:25:24 ftl -- common/autotest_common.sh@967 -- # kill 79327 00:19:38.260 09:25:24 ftl -- common/autotest_common.sh@972 -- # wait 79327 00:19:40.793 09:25:26 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:19:40.793 09:25:26 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:40.793 09:25:26 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:40.793 09:25:26 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:40.793 09:25:26 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:40.793 ************************************ 00:19:40.793 START TEST ftl_fio_basic 00:19:40.793 ************************************ 00:19:40.793 09:25:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:40.793 * Looking for test storage... 00:19:40.793 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:40.793 09:25:26 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:40.793 09:25:26 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:19:40.793 09:25:26 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:40.793 09:25:26 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:40.793 09:25:26 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:40.793 09:25:26 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:40.793 09:25:26 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:40.793 09:25:26 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:40.793 09:25:26 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:40.793 09:25:26 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:40.793 09:25:26 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:40.793 09:25:26 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:40.793 09:25:26 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:40.793 09:25:26 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:40.793 09:25:26 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:40.793 09:25:26 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:40.793 09:25:26 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:40.793 09:25:26 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:40.793 09:25:26 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:40.793 09:25:26 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:40.793 09:25:26 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:40.793 09:25:26 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=79468 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 79468 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- common/autotest_common.sh@829 -- # '[' -z 79468 ']' 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:40.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:40.794 09:25:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:40.794 [2024-07-12 09:25:26.812965] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
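
Note: fio.sh runs the basic suite (randw-verify, randw-verify-j2, randw-verify-depth128) against an FTL bdev named ftl0, built on 0000:00:11.0 with 0000:00:10.0 as the non-volatile cache, and it exports the bdev name and JSON config path for the fio jobs to pick up. The prologue, roughly as the trace above shows:

  export FTL_BDEV_NAME=ftl0
  export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  # three reactors (-m 7) for the fio basic suite; fio.sh also sets timeout=240
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 &
  svcpid=$!
  waitforlisten $svcpid
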
00:19:40.794 [2024-07-12 09:25:26.813139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79468 ] 00:19:40.794 [2024-07-12 09:25:26.985765] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:41.052 [2024-07-12 09:25:27.177004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.052 [2024-07-12 09:25:27.177127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.052 [2024-07-12 09:25:27.177138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.619 09:25:27 ftl.ftl_fio_basic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:41.619 09:25:27 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # return 0 00:19:41.619 09:25:27 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:41.619 09:25:27 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:19:41.619 09:25:27 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:41.619 09:25:27 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:19:41.619 09:25:27 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:19:41.619 09:25:27 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:42.185 09:25:28 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:42.185 09:25:28 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:19:42.185 09:25:28 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:42.185 09:25:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:19:42.185 09:25:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:42.185 09:25:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:19:42.185 09:25:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:19:42.185 09:25:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:42.443 09:25:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:42.443 { 00:19:42.443 "name": "nvme0n1", 00:19:42.443 "aliases": [ 00:19:42.443 "a76663ff-de5e-46d3-8a3a-3bd482fe6af9" 00:19:42.443 ], 00:19:42.443 "product_name": "NVMe disk", 00:19:42.443 "block_size": 4096, 00:19:42.443 "num_blocks": 1310720, 00:19:42.443 "uuid": "a76663ff-de5e-46d3-8a3a-3bd482fe6af9", 00:19:42.443 "assigned_rate_limits": { 00:19:42.443 "rw_ios_per_sec": 0, 00:19:42.443 "rw_mbytes_per_sec": 0, 00:19:42.443 "r_mbytes_per_sec": 0, 00:19:42.443 "w_mbytes_per_sec": 0 00:19:42.443 }, 00:19:42.443 "claimed": false, 00:19:42.443 "zoned": false, 00:19:42.443 "supported_io_types": { 00:19:42.443 "read": true, 00:19:42.443 "write": true, 00:19:42.443 "unmap": true, 00:19:42.443 "flush": true, 00:19:42.443 "reset": true, 00:19:42.443 "nvme_admin": true, 00:19:42.443 "nvme_io": true, 00:19:42.443 "nvme_io_md": false, 00:19:42.443 "write_zeroes": true, 00:19:42.443 "zcopy": false, 00:19:42.443 "get_zone_info": false, 00:19:42.443 "zone_management": false, 00:19:42.443 "zone_append": false, 00:19:42.443 "compare": true, 00:19:42.443 "compare_and_write": false, 00:19:42.443 "abort": true, 00:19:42.443 "seek_hole": false, 00:19:42.443 
"seek_data": false, 00:19:42.443 "copy": true, 00:19:42.443 "nvme_iov_md": false 00:19:42.443 }, 00:19:42.443 "driver_specific": { 00:19:42.443 "nvme": [ 00:19:42.443 { 00:19:42.443 "pci_address": "0000:00:11.0", 00:19:42.443 "trid": { 00:19:42.443 "trtype": "PCIe", 00:19:42.443 "traddr": "0000:00:11.0" 00:19:42.443 }, 00:19:42.443 "ctrlr_data": { 00:19:42.443 "cntlid": 0, 00:19:42.443 "vendor_id": "0x1b36", 00:19:42.443 "model_number": "QEMU NVMe Ctrl", 00:19:42.443 "serial_number": "12341", 00:19:42.443 "firmware_revision": "8.0.0", 00:19:42.443 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:42.443 "oacs": { 00:19:42.443 "security": 0, 00:19:42.443 "format": 1, 00:19:42.443 "firmware": 0, 00:19:42.443 "ns_manage": 1 00:19:42.443 }, 00:19:42.443 "multi_ctrlr": false, 00:19:42.443 "ana_reporting": false 00:19:42.443 }, 00:19:42.443 "vs": { 00:19:42.443 "nvme_version": "1.4" 00:19:42.443 }, 00:19:42.443 "ns_data": { 00:19:42.443 "id": 1, 00:19:42.443 "can_share": false 00:19:42.443 } 00:19:42.443 } 00:19:42.443 ], 00:19:42.443 "mp_policy": "active_passive" 00:19:42.443 } 00:19:42.443 } 00:19:42.443 ]' 00:19:42.443 09:25:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:42.443 09:25:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:19:42.443 09:25:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:42.443 09:25:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:19:42.443 09:25:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:19:42.443 09:25:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:19:42.443 09:25:28 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:19:42.443 09:25:28 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:42.443 09:25:28 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:19:42.443 09:25:28 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:42.443 09:25:28 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:42.702 09:25:28 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:19:42.702 09:25:28 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:42.958 09:25:29 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=e59cf01f-c076-4ace-b5c2-a23064d89771 00:19:42.958 09:25:29 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e59cf01f-c076-4ace-b5c2-a23064d89771 00:19:43.214 09:25:29 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=7b2e26d0-c41c-4af8-a54a-48210e1a0022 00:19:43.214 09:25:29 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7b2e26d0-c41c-4af8-a54a-48210e1a0022 00:19:43.214 09:25:29 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:19:43.214 09:25:29 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:43.214 09:25:29 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=7b2e26d0-c41c-4af8-a54a-48210e1a0022 00:19:43.214 09:25:29 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:19:43.214 09:25:29 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 7b2e26d0-c41c-4af8-a54a-48210e1a0022 00:19:43.214 09:25:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=7b2e26d0-c41c-4af8-a54a-48210e1a0022 00:19:43.214 09:25:29 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:43.214 09:25:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:19:43.214 09:25:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:19:43.214 09:25:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7b2e26d0-c41c-4af8-a54a-48210e1a0022 00:19:43.473 09:25:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:43.473 { 00:19:43.473 "name": "7b2e26d0-c41c-4af8-a54a-48210e1a0022", 00:19:43.473 "aliases": [ 00:19:43.473 "lvs/nvme0n1p0" 00:19:43.473 ], 00:19:43.473 "product_name": "Logical Volume", 00:19:43.473 "block_size": 4096, 00:19:43.473 "num_blocks": 26476544, 00:19:43.473 "uuid": "7b2e26d0-c41c-4af8-a54a-48210e1a0022", 00:19:43.473 "assigned_rate_limits": { 00:19:43.473 "rw_ios_per_sec": 0, 00:19:43.473 "rw_mbytes_per_sec": 0, 00:19:43.473 "r_mbytes_per_sec": 0, 00:19:43.473 "w_mbytes_per_sec": 0 00:19:43.473 }, 00:19:43.473 "claimed": false, 00:19:43.473 "zoned": false, 00:19:43.473 "supported_io_types": { 00:19:43.473 "read": true, 00:19:43.473 "write": true, 00:19:43.473 "unmap": true, 00:19:43.473 "flush": false, 00:19:43.473 "reset": true, 00:19:43.473 "nvme_admin": false, 00:19:43.473 "nvme_io": false, 00:19:43.473 "nvme_io_md": false, 00:19:43.473 "write_zeroes": true, 00:19:43.473 "zcopy": false, 00:19:43.473 "get_zone_info": false, 00:19:43.473 "zone_management": false, 00:19:43.473 "zone_append": false, 00:19:43.473 "compare": false, 00:19:43.473 "compare_and_write": false, 00:19:43.473 "abort": false, 00:19:43.473 "seek_hole": true, 00:19:43.473 "seek_data": true, 00:19:43.473 "copy": false, 00:19:43.473 "nvme_iov_md": false 00:19:43.473 }, 00:19:43.473 "driver_specific": { 00:19:43.473 "lvol": { 00:19:43.473 "lvol_store_uuid": "e59cf01f-c076-4ace-b5c2-a23064d89771", 00:19:43.473 "base_bdev": "nvme0n1", 00:19:43.473 "thin_provision": true, 00:19:43.473 "num_allocated_clusters": 0, 00:19:43.473 "snapshot": false, 00:19:43.473 "clone": false, 00:19:43.473 "esnap_clone": false 00:19:43.473 } 00:19:43.473 } 00:19:43.473 } 00:19:43.473 ]' 00:19:43.473 09:25:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:43.473 09:25:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:19:43.473 09:25:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:43.473 09:25:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:43.473 09:25:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:43.473 09:25:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:19:43.473 09:25:29 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:19:43.473 09:25:29 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:19:43.473 09:25:29 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:43.732 09:25:30 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:43.732 09:25:30 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:43.732 09:25:30 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 7b2e26d0-c41c-4af8-a54a-48210e1a0022 00:19:43.732 09:25:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=7b2e26d0-c41c-4af8-a54a-48210e1a0022 00:19:43.732 09:25:30 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:19:43.732 09:25:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:19:43.732 09:25:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:19:43.732 09:25:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7b2e26d0-c41c-4af8-a54a-48210e1a0022 00:19:43.991 09:25:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:43.991 { 00:19:43.991 "name": "7b2e26d0-c41c-4af8-a54a-48210e1a0022", 00:19:43.991 "aliases": [ 00:19:43.991 "lvs/nvme0n1p0" 00:19:43.991 ], 00:19:43.991 "product_name": "Logical Volume", 00:19:43.991 "block_size": 4096, 00:19:43.991 "num_blocks": 26476544, 00:19:43.991 "uuid": "7b2e26d0-c41c-4af8-a54a-48210e1a0022", 00:19:43.991 "assigned_rate_limits": { 00:19:43.991 "rw_ios_per_sec": 0, 00:19:43.991 "rw_mbytes_per_sec": 0, 00:19:43.991 "r_mbytes_per_sec": 0, 00:19:43.991 "w_mbytes_per_sec": 0 00:19:43.991 }, 00:19:43.991 "claimed": false, 00:19:43.991 "zoned": false, 00:19:43.991 "supported_io_types": { 00:19:43.991 "read": true, 00:19:43.991 "write": true, 00:19:43.991 "unmap": true, 00:19:43.991 "flush": false, 00:19:43.991 "reset": true, 00:19:43.991 "nvme_admin": false, 00:19:43.991 "nvme_io": false, 00:19:43.991 "nvme_io_md": false, 00:19:43.991 "write_zeroes": true, 00:19:43.991 "zcopy": false, 00:19:43.991 "get_zone_info": false, 00:19:43.991 "zone_management": false, 00:19:43.991 "zone_append": false, 00:19:43.991 "compare": false, 00:19:43.991 "compare_and_write": false, 00:19:43.991 "abort": false, 00:19:43.991 "seek_hole": true, 00:19:43.991 "seek_data": true, 00:19:43.991 "copy": false, 00:19:43.991 "nvme_iov_md": false 00:19:43.991 }, 00:19:43.991 "driver_specific": { 00:19:43.991 "lvol": { 00:19:43.991 "lvol_store_uuid": "e59cf01f-c076-4ace-b5c2-a23064d89771", 00:19:43.991 "base_bdev": "nvme0n1", 00:19:43.991 "thin_provision": true, 00:19:43.991 "num_allocated_clusters": 0, 00:19:43.991 "snapshot": false, 00:19:43.991 "clone": false, 00:19:43.991 "esnap_clone": false 00:19:43.991 } 00:19:43.991 } 00:19:43.991 } 00:19:43.991 ]' 00:19:43.991 09:25:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:44.249 09:25:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:19:44.249 09:25:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:44.249 09:25:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:44.249 09:25:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:44.249 09:25:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:19:44.249 09:25:30 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:19:44.249 09:25:30 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:44.507 09:25:30 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:19:44.507 09:25:30 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:19:44.507 09:25:30 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:19:44.507 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:19:44.507 09:25:30 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 7b2e26d0-c41c-4af8-a54a-48210e1a0022 00:19:44.507 09:25:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=7b2e26d0-c41c-4af8-a54a-48210e1a0022 
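Note on the "[: -eq: unary operator expected" message above: fio.sh line 52 evaluated '[' -eq 1 ']', i.e. the variable on the left of -eq expanded to nothing, so the [ builtin saw -eq as its first operand. The run continues past it (the test simply evaluates false), but the failure mode is worth spelling out. A minimal sketch of the behavior and the usual guards, not taken from fio.sh, assuming an empty variable:

  # illustrative reproduction of the error seen above (not from fio.sh)
  flag=
  [ $flag -eq 1 ] && echo enabled          # expands to: [ -eq 1 ]  -> "unary operator expected"
  # guarded forms that stay well-formed when the variable is empty:
  [ "${flag:-0}" -eq 1 ] && echo enabled   # quote and supply a default
  [[ $flag -eq 1 ]] && echo enabled        # bash [[ ]] does not word-split; empty compares as 0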
00:19:44.507 09:25:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:44.507 09:25:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:19:44.507 09:25:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:19:44.507 09:25:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7b2e26d0-c41c-4af8-a54a-48210e1a0022 00:19:44.766 09:25:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:44.766 { 00:19:44.766 "name": "7b2e26d0-c41c-4af8-a54a-48210e1a0022", 00:19:44.766 "aliases": [ 00:19:44.766 "lvs/nvme0n1p0" 00:19:44.766 ], 00:19:44.766 "product_name": "Logical Volume", 00:19:44.766 "block_size": 4096, 00:19:44.766 "num_blocks": 26476544, 00:19:44.766 "uuid": "7b2e26d0-c41c-4af8-a54a-48210e1a0022", 00:19:44.766 "assigned_rate_limits": { 00:19:44.766 "rw_ios_per_sec": 0, 00:19:44.766 "rw_mbytes_per_sec": 0, 00:19:44.766 "r_mbytes_per_sec": 0, 00:19:44.766 "w_mbytes_per_sec": 0 00:19:44.766 }, 00:19:44.766 "claimed": false, 00:19:44.766 "zoned": false, 00:19:44.766 "supported_io_types": { 00:19:44.766 "read": true, 00:19:44.766 "write": true, 00:19:44.766 "unmap": true, 00:19:44.766 "flush": false, 00:19:44.766 "reset": true, 00:19:44.766 "nvme_admin": false, 00:19:44.766 "nvme_io": false, 00:19:44.766 "nvme_io_md": false, 00:19:44.766 "write_zeroes": true, 00:19:44.766 "zcopy": false, 00:19:44.766 "get_zone_info": false, 00:19:44.766 "zone_management": false, 00:19:44.766 "zone_append": false, 00:19:44.766 "compare": false, 00:19:44.766 "compare_and_write": false, 00:19:44.766 "abort": false, 00:19:44.766 "seek_hole": true, 00:19:44.766 "seek_data": true, 00:19:44.766 "copy": false, 00:19:44.766 "nvme_iov_md": false 00:19:44.766 }, 00:19:44.766 "driver_specific": { 00:19:44.766 "lvol": { 00:19:44.766 "lvol_store_uuid": "e59cf01f-c076-4ace-b5c2-a23064d89771", 00:19:44.766 "base_bdev": "nvme0n1", 00:19:44.766 "thin_provision": true, 00:19:44.766 "num_allocated_clusters": 0, 00:19:44.766 "snapshot": false, 00:19:44.766 "clone": false, 00:19:44.766 "esnap_clone": false 00:19:44.766 } 00:19:44.766 } 00:19:44.766 } 00:19:44.766 ]' 00:19:44.766 09:25:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:44.766 09:25:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:19:44.766 09:25:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:44.766 09:25:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:44.766 09:25:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:44.766 09:25:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:19:44.766 09:25:31 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:19:44.766 09:25:31 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:19:44.766 09:25:31 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7b2e26d0-c41c-4af8-a54a-48210e1a0022 -c nvc0n1p0 --l2p_dram_limit 60 00:19:45.028 [2024-07-12 09:25:31.282732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.028 [2024-07-12 09:25:31.282808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:45.028 [2024-07-12 09:25:31.282831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:45.028 [2024-07-12 09:25:31.282845] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.028 [2024-07-12 09:25:31.282937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.028 [2024-07-12 09:25:31.282960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:45.028 [2024-07-12 09:25:31.282973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:19:45.028 [2024-07-12 09:25:31.282987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.028 [2024-07-12 09:25:31.283023] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:45.028 [2024-07-12 09:25:31.284074] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:45.028 [2024-07-12 09:25:31.284116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.028 [2024-07-12 09:25:31.284139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:45.028 [2024-07-12 09:25:31.284153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.100 ms 00:19:45.028 [2024-07-12 09:25:31.284167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.028 [2024-07-12 09:25:31.284381] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6d90459a-a6cc-45d4-ae95-1f37d398e331 00:19:45.028 [2024-07-12 09:25:31.285497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.028 [2024-07-12 09:25:31.285536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:45.028 [2024-07-12 09:25:31.285556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:45.028 [2024-07-12 09:25:31.285568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.028 [2024-07-12 09:25:31.290353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.028 [2024-07-12 09:25:31.290403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:45.028 [2024-07-12 09:25:31.290424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.700 ms 00:19:45.028 [2024-07-12 09:25:31.290440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.028 [2024-07-12 09:25:31.290589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.028 [2024-07-12 09:25:31.290614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:45.028 [2024-07-12 09:25:31.290631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:19:45.028 [2024-07-12 09:25:31.290642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.028 [2024-07-12 09:25:31.290736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.028 [2024-07-12 09:25:31.290754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:45.028 [2024-07-12 09:25:31.290770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:45.028 [2024-07-12 09:25:31.290782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.028 [2024-07-12 09:25:31.290829] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:45.028 [2024-07-12 09:25:31.295420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.028 [2024-07-12 09:25:31.295466] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:45.028 [2024-07-12 09:25:31.295486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.603 ms 00:19:45.028 [2024-07-12 09:25:31.295511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.028 [2024-07-12 09:25:31.295570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.028 [2024-07-12 09:25:31.295589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:45.028 [2024-07-12 09:25:31.295603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:45.028 [2024-07-12 09:25:31.295616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.028 [2024-07-12 09:25:31.295704] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:45.028 [2024-07-12 09:25:31.295889] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:45.028 [2024-07-12 09:25:31.295923] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:45.028 [2024-07-12 09:25:31.295946] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:45.028 [2024-07-12 09:25:31.295962] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:45.028 [2024-07-12 09:25:31.295979] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:45.028 [2024-07-12 09:25:31.295992] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:45.028 [2024-07-12 09:25:31.296006] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:45.028 [2024-07-12 09:25:31.296018] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:45.028 [2024-07-12 09:25:31.296033] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:45.028 [2024-07-12 09:25:31.296046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.028 [2024-07-12 09:25:31.296069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:45.028 [2024-07-12 09:25:31.296081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:19:45.028 [2024-07-12 09:25:31.296096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.028 [2024-07-12 09:25:31.296221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.028 [2024-07-12 09:25:31.296244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:45.028 [2024-07-12 09:25:31.296258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:45.028 [2024-07-12 09:25:31.296271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.028 [2024-07-12 09:25:31.296392] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:45.028 [2024-07-12 09:25:31.296427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:45.028 [2024-07-12 09:25:31.296441] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:45.028 [2024-07-12 09:25:31.296456] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.028 [2024-07-12 09:25:31.296468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:45.028 [2024-07-12 
09:25:31.296481] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:45.028 [2024-07-12 09:25:31.296493] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:45.028 [2024-07-12 09:25:31.296505] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:45.028 [2024-07-12 09:25:31.296516] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:45.028 [2024-07-12 09:25:31.296530] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:45.028 [2024-07-12 09:25:31.296540] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:45.028 [2024-07-12 09:25:31.296553] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:45.028 [2024-07-12 09:25:31.296574] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:45.028 [2024-07-12 09:25:31.296589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:45.028 [2024-07-12 09:25:31.296600] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:45.028 [2024-07-12 09:25:31.296613] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.028 [2024-07-12 09:25:31.296624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:45.028 [2024-07-12 09:25:31.296639] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:45.028 [2024-07-12 09:25:31.296649] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.028 [2024-07-12 09:25:31.296662] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:45.028 [2024-07-12 09:25:31.296673] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:45.028 [2024-07-12 09:25:31.296686] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:45.028 [2024-07-12 09:25:31.296700] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:45.028 [2024-07-12 09:25:31.296713] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:45.028 [2024-07-12 09:25:31.296724] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:45.028 [2024-07-12 09:25:31.296737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:45.028 [2024-07-12 09:25:31.296748] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:45.028 [2024-07-12 09:25:31.296760] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:45.028 [2024-07-12 09:25:31.296771] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:45.028 [2024-07-12 09:25:31.296784] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:45.028 [2024-07-12 09:25:31.296794] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:45.028 [2024-07-12 09:25:31.296807] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:45.028 [2024-07-12 09:25:31.296817] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:45.028 [2024-07-12 09:25:31.296832] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:45.029 [2024-07-12 09:25:31.296843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:45.029 [2024-07-12 09:25:31.296856] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:45.029 [2024-07-12 09:25:31.296866] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:19:45.029 [2024-07-12 09:25:31.296879] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:45.029 [2024-07-12 09:25:31.296890] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:45.029 [2024-07-12 09:25:31.296903] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.029 [2024-07-12 09:25:31.296914] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:45.029 [2024-07-12 09:25:31.296927] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:45.029 [2024-07-12 09:25:31.296937] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.029 [2024-07-12 09:25:31.296949] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:45.029 [2024-07-12 09:25:31.296962] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:45.029 [2024-07-12 09:25:31.296998] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:45.029 [2024-07-12 09:25:31.297010] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.029 [2024-07-12 09:25:31.297025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:45.029 [2024-07-12 09:25:31.297036] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:45.029 [2024-07-12 09:25:31.297051] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:45.029 [2024-07-12 09:25:31.297062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:45.029 [2024-07-12 09:25:31.297075] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:45.029 [2024-07-12 09:25:31.297086] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:45.029 [2024-07-12 09:25:31.297109] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:45.029 [2024-07-12 09:25:31.297126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:45.029 [2024-07-12 09:25:31.297142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:45.029 [2024-07-12 09:25:31.297155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:45.029 [2024-07-12 09:25:31.297169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:45.029 [2024-07-12 09:25:31.297180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:45.029 [2024-07-12 09:25:31.297211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:45.029 [2024-07-12 09:25:31.297223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:45.029 [2024-07-12 09:25:31.297237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:45.029 [2024-07-12 09:25:31.297249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:45.029 [2024-07-12 
09:25:31.297264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:45.029 [2024-07-12 09:25:31.297276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:45.029 [2024-07-12 09:25:31.297292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:45.029 [2024-07-12 09:25:31.297304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:45.029 [2024-07-12 09:25:31.297318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:45.029 [2024-07-12 09:25:31.297330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:45.029 [2024-07-12 09:25:31.297344] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:45.029 [2024-07-12 09:25:31.297359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:45.029 [2024-07-12 09:25:31.297374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:45.029 [2024-07-12 09:25:31.297386] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:45.029 [2024-07-12 09:25:31.297400] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:45.029 [2024-07-12 09:25:31.297411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:45.029 [2024-07-12 09:25:31.297426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.029 [2024-07-12 09:25:31.297438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:45.029 [2024-07-12 09:25:31.297453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.099 ms 00:19:45.029 [2024-07-12 09:25:31.297465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.029 [2024-07-12 09:25:31.297542] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
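The layout dump above is internally consistent with the --l2p_dram_limit 60 passed to bdev_ftl_create: 20971520 L2P entries (one per 4 KiB user block) at an address size of 4 bytes is exactly 80 MiB of mapping table, which is the "Region l2p ... blocks: 80.00 MiB" entry, and since that does not fit the 60 MiB DRAM budget only part of the table can stay resident (hence the later "l2p maximum resident size is: 59 (of 60) MiB" notice). A quick sanity check of that arithmetic, with the values copied from the dump:

  # sanity check of the L2P sizing printed above (values copied from the dump)
  entries=20971520     # "L2P entries"
  addr_size=4          # "L2P address size" (bytes per entry)
  echo $(( entries * addr_size / 1024 / 1024 ))   # -> 80 (MiB), matches "Region l2p ... 80.00 MiB"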
00:19:45.029 [2024-07-12 09:25:31.297560] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:48.305 [2024-07-12 09:25:34.216330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.305 [2024-07-12 09:25:34.216417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:48.305 [2024-07-12 09:25:34.216444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2918.796 ms 00:19:48.305 [2024-07-12 09:25:34.216457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.305 [2024-07-12 09:25:34.249156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.305 [2024-07-12 09:25:34.249236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:48.305 [2024-07-12 09:25:34.249261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.413 ms 00:19:48.305 [2024-07-12 09:25:34.249275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.305 [2024-07-12 09:25:34.249479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.305 [2024-07-12 09:25:34.249500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:48.306 [2024-07-12 09:25:34.249517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:48.306 [2024-07-12 09:25:34.249528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.306 [2024-07-12 09:25:34.298208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.306 [2024-07-12 09:25:34.298280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:48.306 [2024-07-12 09:25:34.298306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.607 ms 00:19:48.306 [2024-07-12 09:25:34.298318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.306 [2024-07-12 09:25:34.298403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.306 [2024-07-12 09:25:34.298420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:48.306 [2024-07-12 09:25:34.298436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:48.306 [2024-07-12 09:25:34.298449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.306 [2024-07-12 09:25:34.298855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.306 [2024-07-12 09:25:34.298886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:48.306 [2024-07-12 09:25:34.298906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:19:48.306 [2024-07-12 09:25:34.298919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.306 [2024-07-12 09:25:34.299101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.306 [2024-07-12 09:25:34.299128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:48.306 [2024-07-12 09:25:34.299145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:19:48.306 [2024-07-12 09:25:34.299157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.306 [2024-07-12 09:25:34.320375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.306 [2024-07-12 09:25:34.320440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:48.306 [2024-07-12 
09:25:34.320464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.163 ms 00:19:48.306 [2024-07-12 09:25:34.320479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.306 [2024-07-12 09:25:34.333980] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:48.306 [2024-07-12 09:25:34.348359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.306 [2024-07-12 09:25:34.348438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:48.306 [2024-07-12 09:25:34.348464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.724 ms 00:19:48.306 [2024-07-12 09:25:34.348480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.306 [2024-07-12 09:25:34.406289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.306 [2024-07-12 09:25:34.406385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:48.306 [2024-07-12 09:25:34.406408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.735 ms 00:19:48.306 [2024-07-12 09:25:34.406423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.306 [2024-07-12 09:25:34.406715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.306 [2024-07-12 09:25:34.406750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:48.306 [2024-07-12 09:25:34.406767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:19:48.306 [2024-07-12 09:25:34.406785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.306 [2024-07-12 09:25:34.439544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.306 [2024-07-12 09:25:34.439622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:48.306 [2024-07-12 09:25:34.439645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.658 ms 00:19:48.306 [2024-07-12 09:25:34.439660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.306 [2024-07-12 09:25:34.471109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.306 [2024-07-12 09:25:34.471198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:48.306 [2024-07-12 09:25:34.471223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.381 ms 00:19:48.306 [2024-07-12 09:25:34.471239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.306 [2024-07-12 09:25:34.472008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.306 [2024-07-12 09:25:34.472046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:48.306 [2024-07-12 09:25:34.472062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:19:48.306 [2024-07-12 09:25:34.472077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.306 [2024-07-12 09:25:34.561553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.306 [2024-07-12 09:25:34.561655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:48.306 [2024-07-12 09:25:34.561681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.383 ms 00:19:48.306 [2024-07-12 09:25:34.561700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.306 [2024-07-12 
09:25:34.594557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.306 [2024-07-12 09:25:34.594636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:48.306 [2024-07-12 09:25:34.594658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.776 ms 00:19:48.306 [2024-07-12 09:25:34.594674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.306 [2024-07-12 09:25:34.627062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.306 [2024-07-12 09:25:34.627148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:48.306 [2024-07-12 09:25:34.627169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.302 ms 00:19:48.306 [2024-07-12 09:25:34.627193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.564 [2024-07-12 09:25:34.659575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.564 [2024-07-12 09:25:34.659655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:48.564 [2024-07-12 09:25:34.659678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.295 ms 00:19:48.564 [2024-07-12 09:25:34.659693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.564 [2024-07-12 09:25:34.659789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.564 [2024-07-12 09:25:34.659815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:48.564 [2024-07-12 09:25:34.659833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:48.564 [2024-07-12 09:25:34.659852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.564 [2024-07-12 09:25:34.659998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.564 [2024-07-12 09:25:34.660038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:48.564 [2024-07-12 09:25:34.660052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:48.564 [2024-07-12 09:25:34.660066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.564 [2024-07-12 09:25:34.661412] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3378.139 ms, result 0 00:19:48.564 { 00:19:48.564 "name": "ftl0", 00:19:48.564 "uuid": "6d90459a-a6cc-45d4-ae95-1f37d398e331" 00:19:48.564 } 00:19:48.564 09:25:34 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:19:48.564 09:25:34 ftl.ftl_fio_basic -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:19:48.564 09:25:34 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:48.564 09:25:34 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local i 00:19:48.564 09:25:34 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:48.564 09:25:34 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:48.564 09:25:34 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:48.822 09:25:34 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:48.822 [ 00:19:48.822 { 00:19:48.822 "name": "ftl0", 00:19:48.822 "aliases": [ 00:19:48.822 "6d90459a-a6cc-45d4-ae95-1f37d398e331" 00:19:48.822 ], 00:19:48.822 "product_name": "FTL 
disk", 00:19:48.822 "block_size": 4096, 00:19:48.822 "num_blocks": 20971520, 00:19:48.822 "uuid": "6d90459a-a6cc-45d4-ae95-1f37d398e331", 00:19:48.822 "assigned_rate_limits": { 00:19:48.822 "rw_ios_per_sec": 0, 00:19:48.822 "rw_mbytes_per_sec": 0, 00:19:48.822 "r_mbytes_per_sec": 0, 00:19:48.822 "w_mbytes_per_sec": 0 00:19:48.822 }, 00:19:48.822 "claimed": false, 00:19:48.822 "zoned": false, 00:19:48.822 "supported_io_types": { 00:19:48.822 "read": true, 00:19:48.822 "write": true, 00:19:48.822 "unmap": true, 00:19:48.822 "flush": true, 00:19:48.822 "reset": false, 00:19:48.822 "nvme_admin": false, 00:19:48.822 "nvme_io": false, 00:19:48.822 "nvme_io_md": false, 00:19:48.822 "write_zeroes": true, 00:19:48.822 "zcopy": false, 00:19:48.822 "get_zone_info": false, 00:19:48.822 "zone_management": false, 00:19:48.822 "zone_append": false, 00:19:48.822 "compare": false, 00:19:48.822 "compare_and_write": false, 00:19:48.822 "abort": false, 00:19:48.822 "seek_hole": false, 00:19:48.822 "seek_data": false, 00:19:48.822 "copy": false, 00:19:48.822 "nvme_iov_md": false 00:19:48.822 }, 00:19:48.822 "driver_specific": { 00:19:48.822 "ftl": { 00:19:48.822 "base_bdev": "7b2e26d0-c41c-4af8-a54a-48210e1a0022", 00:19:48.822 "cache": "nvc0n1p0" 00:19:48.822 } 00:19:48.822 } 00:19:48.822 } 00:19:48.822 ] 00:19:48.822 09:25:35 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # return 0 00:19:48.822 09:25:35 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:19:48.822 09:25:35 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:49.080 09:25:35 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:19:49.080 09:25:35 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:49.338 [2024-07-12 09:25:35.630337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.338 [2024-07-12 09:25:35.630403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:49.338 [2024-07-12 09:25:35.630431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:49.338 [2024-07-12 09:25:35.630447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.338 [2024-07-12 09:25:35.630507] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:49.338 [2024-07-12 09:25:35.633879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.338 [2024-07-12 09:25:35.633923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:49.338 [2024-07-12 09:25:35.633941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.347 ms 00:19:49.338 [2024-07-12 09:25:35.633955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.338 [2024-07-12 09:25:35.634508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.338 [2024-07-12 09:25:35.634552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:49.338 [2024-07-12 09:25:35.634570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:19:49.338 [2024-07-12 09:25:35.634584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.338 [2024-07-12 09:25:35.637907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.338 [2024-07-12 09:25:35.637944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:49.338 
[2024-07-12 09:25:35.637960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.287 ms 00:19:49.338 [2024-07-12 09:25:35.637973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.338 [2024-07-12 09:25:35.644702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.338 [2024-07-12 09:25:35.644743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:49.338 [2024-07-12 09:25:35.644759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.696 ms 00:19:49.338 [2024-07-12 09:25:35.644773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.338 [2024-07-12 09:25:35.676774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.338 [2024-07-12 09:25:35.676865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:49.338 [2024-07-12 09:25:35.676887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.886 ms 00:19:49.338 [2024-07-12 09:25:35.676902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.598 [2024-07-12 09:25:35.696393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.598 [2024-07-12 09:25:35.696467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:49.598 [2024-07-12 09:25:35.696492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.398 ms 00:19:49.598 [2024-07-12 09:25:35.696507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.598 [2024-07-12 09:25:35.696785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.598 [2024-07-12 09:25:35.696822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:49.598 [2024-07-12 09:25:35.696839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:19:49.598 [2024-07-12 09:25:35.696853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.598 [2024-07-12 09:25:35.728327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.598 [2024-07-12 09:25:35.728395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:49.598 [2024-07-12 09:25:35.728416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.437 ms 00:19:49.598 [2024-07-12 09:25:35.728430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.598 [2024-07-12 09:25:35.759714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.598 [2024-07-12 09:25:35.759783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:49.598 [2024-07-12 09:25:35.759804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.218 ms 00:19:49.598 [2024-07-12 09:25:35.759819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.598 [2024-07-12 09:25:35.790712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.598 [2024-07-12 09:25:35.790781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:49.598 [2024-07-12 09:25:35.790802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.824 ms 00:19:49.598 [2024-07-12 09:25:35.790816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.598 [2024-07-12 09:25:35.821862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.598 [2024-07-12 09:25:35.821931] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:49.598 [2024-07-12 09:25:35.821952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.886 ms 00:19:49.598 [2024-07-12 09:25:35.821967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.598 [2024-07-12 09:25:35.822034] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:49.598 [2024-07-12 09:25:35.822064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:49.598 [2024-07-12 09:25:35.822079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 
[2024-07-12 09:25:35.822395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:19:49.599 [2024-07-12 09:25:35.822737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.822997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:49.599 [2024-07-12 09:25:35.823463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:49.600 [2024-07-12 09:25:35.823487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:49.600 [2024-07-12 09:25:35.823499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:49.600 [2024-07-12 09:25:35.823525] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:49.600 [2024-07-12 09:25:35.823538] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6d90459a-a6cc-45d4-ae95-1f37d398e331 00:19:49.600 [2024-07-12 09:25:35.823552] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:49.600 [2024-07-12 09:25:35.823563] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:49.600 [2024-07-12 09:25:35.823582] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:49.600 [2024-07-12 09:25:35.823594] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:49.600 [2024-07-12 09:25:35.823607] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:49.600 [2024-07-12 09:25:35.823618] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:49.600 [2024-07-12 09:25:35.823631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:49.600 [2024-07-12 09:25:35.823642] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:49.600 [2024-07-12 09:25:35.823654] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:49.600 [2024-07-12 09:25:35.823666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.600 [2024-07-12 09:25:35.823679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:49.600 [2024-07-12 09:25:35.823692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.634 ms 00:19:49.600 [2024-07-12 09:25:35.823706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.600 [2024-07-12 09:25:35.840449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.600 [2024-07-12 09:25:35.840503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:49.600 [2024-07-12 09:25:35.840522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.660 ms 00:19:49.600 [2024-07-12 09:25:35.840536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.600 [2024-07-12 09:25:35.840980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.600 [2024-07-12 09:25:35.841011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:49.600 [2024-07-12 09:25:35.841026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:19:49.600 [2024-07-12 09:25:35.841040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.600 [2024-07-12 09:25:35.899211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.600 [2024-07-12 09:25:35.899282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:49.600 [2024-07-12 09:25:35.899304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.600 [2024-07-12 09:25:35.899319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
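The statistics block in the shutdown trace above reports total writes: 960, user writes: 0, WAF: inf. WAF (write amplification factor) is conventionally total device writes divided by user writes; this unload happens before fio has issued any user I/O, so the denominator is zero, the ratio prints as inf, and all 960 writes are startup/shutdown metadata. The persist steps and the "Set FTL clean state" step a little earlier record a clean shutdown, presumably so a later load of the same device can skip dirty-shutdown recovery. A small illustrative computation with the numbers from the dump (not part of the test):

  # illustrative WAF computation with the values from the dump above
  total_writes=960
  user_writes=0
  if [ "$user_writes" -gt 0 ]; then
    awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF=%.2f\n", t/u }'
  else
    echo "WAF=inf"   # no user I/O yet, matching the log
  fi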
00:19:49.600 [2024-07-12 09:25:35.899433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.600 [2024-07-12 09:25:35.899457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:49.600 [2024-07-12 09:25:35.899470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.600 [2024-07-12 09:25:35.899485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.600 [2024-07-12 09:25:35.899632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.600 [2024-07-12 09:25:35.899663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:49.600 [2024-07-12 09:25:35.899677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.600 [2024-07-12 09:25:35.899691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.600 [2024-07-12 09:25:35.899727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.600 [2024-07-12 09:25:35.899747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:49.600 [2024-07-12 09:25:35.899760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.600 [2024-07-12 09:25:35.899773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.859 [2024-07-12 09:25:36.005138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.859 [2024-07-12 09:25:36.005220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:49.859 [2024-07-12 09:25:36.005240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.859 [2024-07-12 09:25:36.005255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.859 [2024-07-12 09:25:36.089793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.859 [2024-07-12 09:25:36.089869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:49.859 [2024-07-12 09:25:36.089890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.859 [2024-07-12 09:25:36.089904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.859 [2024-07-12 09:25:36.090022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.859 [2024-07-12 09:25:36.090047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:49.859 [2024-07-12 09:25:36.090064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.859 [2024-07-12 09:25:36.090078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.859 [2024-07-12 09:25:36.090156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.859 [2024-07-12 09:25:36.090207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:49.859 [2024-07-12 09:25:36.090225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.859 [2024-07-12 09:25:36.090239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.859 [2024-07-12 09:25:36.090380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.859 [2024-07-12 09:25:36.090417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:49.859 [2024-07-12 09:25:36.090434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.859 [2024-07-12 
09:25:36.090449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.859 [2024-07-12 09:25:36.090519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.859 [2024-07-12 09:25:36.090543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:49.859 [2024-07-12 09:25:36.090557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.859 [2024-07-12 09:25:36.090571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.859 [2024-07-12 09:25:36.090634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.859 [2024-07-12 09:25:36.090664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:49.859 [2024-07-12 09:25:36.090678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.859 [2024-07-12 09:25:36.090695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.859 [2024-07-12 09:25:36.090757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.859 [2024-07-12 09:25:36.090782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:49.859 [2024-07-12 09:25:36.090795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.859 [2024-07-12 09:25:36.090809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.859 [2024-07-12 09:25:36.091009] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 460.651 ms, result 0 00:19:49.859 true 00:19:49.859 09:25:36 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 79468 00:19:49.859 09:25:36 ftl.ftl_fio_basic -- common/autotest_common.sh@948 -- # '[' -z 79468 ']' 00:19:49.859 09:25:36 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # kill -0 79468 00:19:49.859 09:25:36 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # uname 00:19:49.859 09:25:36 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:49.859 09:25:36 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79468 00:19:49.859 killing process with pid 79468 00:19:49.859 09:25:36 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:49.859 09:25:36 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:49.859 09:25:36 ftl.ftl_fio_basic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79468' 00:19:49.859 09:25:36 ftl.ftl_fio_basic -- common/autotest_common.sh@967 -- # kill 79468 00:19:49.859 09:25:36 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # wait 79468 00:19:55.140 09:25:40 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:19:55.140 09:25:40 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:55.140 09:25:40 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:19:55.140 09:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:55.140 09:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:55.140 09:25:40 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:55.140 09:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:55.140 09:25:40 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:55.140 09:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:55.140 09:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:55.140 09:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:55.140 09:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:19:55.140 09:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:55.140 09:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:55.140 09:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:55.140 09:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:19:55.140 09:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:55.140 09:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:55.140 09:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:55.140 09:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:19:55.140 09:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:55.140 09:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:55.140 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:19:55.140 fio-3.35 00:19:55.140 Starting 1 thread 00:20:00.407 00:20:00.407 test: (groupid=0, jobs=1): err= 0: pid=79673: Fri Jul 12 09:25:45 2024 00:20:00.407 read: IOPS=1025, BW=68.1MiB/s (71.4MB/s)(255MiB/3739msec) 00:20:00.407 slat (nsec): min=5906, max=47710, avg=7672.85, stdev=3202.06 00:20:00.407 clat (usec): min=285, max=933, avg=435.45, stdev=52.40 00:20:00.407 lat (usec): min=302, max=940, avg=443.12, stdev=53.18 00:20:00.407 clat percentiles (usec): 00:20:00.407 | 1.00th=[ 359], 5.00th=[ 371], 10.00th=[ 375], 20.00th=[ 379], 00:20:00.407 | 30.00th=[ 392], 40.00th=[ 433], 50.00th=[ 445], 60.00th=[ 445], 00:20:00.407 | 70.00th=[ 453], 80.00th=[ 465], 90.00th=[ 510], 95.00th=[ 529], 00:20:00.407 | 99.00th=[ 578], 99.50th=[ 586], 99.90th=[ 676], 99.95th=[ 750], 00:20:00.407 | 99.99th=[ 938] 00:20:00.407 write: IOPS=1032, BW=68.6MiB/s (71.9MB/s)(256MiB/3735msec); 0 zone resets 00:20:00.407 slat (usec): min=20, max=104, avg=24.24, stdev= 5.16 00:20:00.407 clat (usec): min=350, max=921, avg=491.37, stdev=59.70 00:20:00.407 lat (usec): min=373, max=956, avg=515.60, stdev=59.78 00:20:00.407 clat percentiles (usec): 00:20:00.407 | 1.00th=[ 392], 5.00th=[ 404], 10.00th=[ 412], 20.00th=[ 461], 00:20:00.407 | 30.00th=[ 469], 40.00th=[ 474], 50.00th=[ 478], 60.00th=[ 486], 00:20:00.407 | 70.00th=[ 519], 80.00th=[ 537], 90.00th=[ 553], 95.00th=[ 594], 00:20:00.407 | 99.00th=[ 693], 99.50th=[ 758], 99.90th=[ 824], 99.95th=[ 873], 00:20:00.407 | 99.99th=[ 922] 00:20:00.407 bw ( KiB/s): min=68952, max=72080, per=100.00%, avg=70331.43, stdev=1279.24, samples=7 00:20:00.407 iops : min= 1014, max= 1060, avg=1034.29, stdev=18.81, samples=7 00:20:00.407 lat (usec) : 500=77.16%, 750=22.54%, 1000=0.30% 00:20:00.407 cpu : 
usr=99.30%, sys=0.11%, ctx=28, majf=0, minf=1171 00:20:00.407 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:00.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.407 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:00.407 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:00.407 00:20:00.407 Run status group 0 (all jobs): 00:20:00.407 READ: bw=68.1MiB/s (71.4MB/s), 68.1MiB/s-68.1MiB/s (71.4MB/s-71.4MB/s), io=255MiB (267MB), run=3739-3739msec 00:20:00.407 WRITE: bw=68.6MiB/s (71.9MB/s), 68.6MiB/s-68.6MiB/s (71.9MB/s-71.9MB/s), io=256MiB (269MB), run=3735-3735msec 00:20:01.343 ----------------------------------------------------- 00:20:01.343 Suppressions used: 00:20:01.343 count bytes template 00:20:01.343 1 5 /usr/src/fio/parse.c 00:20:01.343 1 8 libtcmalloc_minimal.so 00:20:01.343 1 904 libcrypto.so 00:20:01.343 ----------------------------------------------------- 00:20:01.343 00:20:01.343 09:25:47 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:20:01.343 09:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:01.343 09:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:01.343 09:25:47 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:01.343 09:25:47 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:20:01.343 09:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:01.343 09:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:01.343 09:25:47 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:01.343 09:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:01.343 09:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:01.343 09:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:01.343 09:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:01.343 09:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.343 09:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:20:01.343 09:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:01.343 09:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:01.343 09:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.343 09:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:20:01.343 09:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:01.343 09:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:01.343 09:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:01.343 09:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:20:01.343 09:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:01.343 09:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:01.604 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:01.604 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:01.604 fio-3.35 00:20:01.604 Starting 2 threads 00:20:33.683 00:20:33.683 first_half: (groupid=0, jobs=1): err= 0: pid=79782: Fri Jul 12 09:26:18 2024 00:20:33.683 read: IOPS=2245, BW=8981KiB/s (9197kB/s)(255MiB/29122msec) 00:20:33.683 slat (usec): min=4, max=1527, avg= 7.83, stdev=10.59 00:20:33.683 clat (usec): min=885, max=329059, avg=42722.88, stdev=23508.64 00:20:33.683 lat (usec): min=900, max=329069, avg=42730.70, stdev=23508.96 00:20:33.683 clat percentiles (msec): 00:20:33.683 | 1.00th=[ 10], 5.00th=[ 37], 10.00th=[ 38], 20.00th=[ 39], 00:20:33.683 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 40], 00:20:33.683 | 70.00th=[ 41], 80.00th=[ 43], 90.00th=[ 46], 95.00th=[ 56], 00:20:33.683 | 99.00th=[ 180], 99.50th=[ 209], 99.90th=[ 275], 99.95th=[ 300], 00:20:33.683 | 99.99th=[ 326] 00:20:33.683 write: IOPS=2348, BW=9394KiB/s (9620kB/s)(256MiB/27904msec); 0 zone resets 00:20:33.683 slat (usec): min=5, max=3885, avg= 9.99, stdev=17.57 00:20:33.683 clat (usec): min=450, max=165810, avg=14206.15, stdev=25098.14 00:20:33.683 lat (usec): min=468, max=165822, avg=14216.14, stdev=25098.55 00:20:33.683 clat percentiles (usec): 00:20:33.683 | 1.00th=[ 971], 5.00th=[ 1303], 10.00th=[ 1532], 20.00th=[ 2114], 00:20:33.683 | 30.00th=[ 3785], 40.00th=[ 5538], 50.00th=[ 6456], 60.00th=[ 7439], 00:20:33.683 | 70.00th=[ 8717], 80.00th=[ 13042], 90.00th=[ 32113], 95.00th=[ 88605], 00:20:33.683 | 99.00th=[110625], 99.50th=[129500], 99.90th=[156238], 99.95th=[160433], 00:20:33.683 | 99.99th=[162530] 00:20:33.683 bw ( KiB/s): min= 624, max=41648, per=90.01%, avg=16912.94, stdev=11789.80, samples=31 00:20:33.683 iops : min= 156, max=10412, avg=4228.23, stdev=2947.44, samples=31 00:20:33.683 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.56% 00:20:33.683 lat (msec) : 2=8.82%, 4=6.33%, 10=22.49%, 20=8.02%, 50=46.60% 00:20:33.683 lat (msec) : 100=4.77%, 250=2.26%, 500=0.11% 00:20:33.683 cpu : usr=98.26%, sys=0.45%, ctx=83, majf=0, minf=5518 00:20:33.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:33.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:33.683 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:33.683 issued rwts: total=65388,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:33.683 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:33.683 second_half: (groupid=0, jobs=1): err= 0: pid=79783: Fri Jul 12 09:26:18 2024 00:20:33.683 read: IOPS=2254, BW=9016KiB/s (9233kB/s)(255MiB/28948msec) 00:20:33.683 slat (usec): min=4, max=145, avg= 7.57, stdev= 1.98 00:20:33.683 clat (usec): min=819, max=331620, avg=43509.10, stdev=22057.13 00:20:33.683 lat (usec): min=829, max=331628, avg=43516.67, stdev=22057.24 00:20:33.683 clat percentiles (msec): 00:20:33.683 | 1.00th=[ 10], 5.00th=[ 37], 10.00th=[ 38], 20.00th=[ 39], 00:20:33.683 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 40], 00:20:33.683 | 70.00th=[ 41], 80.00th=[ 43], 90.00th=[ 46], 95.00th=[ 60], 00:20:33.683 | 99.00th=[ 167], 99.50th=[ 186], 99.90th=[ 
251], 99.95th=[ 264], 00:20:33.683 | 99.99th=[ 305] 00:20:33.683 write: IOPS=2758, BW=10.8MiB/s (11.3MB/s)(256MiB/23761msec); 0 zone resets 00:20:33.683 slat (usec): min=5, max=3811, avg= 9.81, stdev=16.53 00:20:33.683 clat (usec): min=496, max=166058, avg=13192.18, stdev=24607.06 00:20:33.683 lat (usec): min=503, max=166067, avg=13202.00, stdev=24607.32 00:20:33.683 clat percentiles (usec): 00:20:33.683 | 1.00th=[ 1057], 5.00th=[ 1336], 10.00th=[ 1532], 20.00th=[ 1909], 00:20:33.683 | 30.00th=[ 2638], 40.00th=[ 4146], 50.00th=[ 5735], 60.00th=[ 6849], 00:20:33.683 | 70.00th=[ 8225], 80.00th=[ 13173], 90.00th=[ 17171], 95.00th=[ 86508], 00:20:33.683 | 99.00th=[107480], 99.50th=[125305], 99.90th=[160433], 99.95th=[160433], 00:20:33.683 | 99.99th=[162530] 00:20:33.683 bw ( KiB/s): min= 144, max=37904, per=99.65%, avg=18724.57, stdev=9766.87, samples=28 00:20:33.683 iops : min= 36, max= 9476, avg=4681.14, stdev=2441.72, samples=28 00:20:33.683 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.31% 00:20:33.683 lat (msec) : 2=10.67%, 4=8.93%, 10=18.19%, 20=8.11%, 50=46.42% 00:20:33.683 lat (msec) : 100=4.77%, 250=2.50%, 500=0.05% 00:20:33.683 cpu : usr=99.06%, sys=0.17%, ctx=1619, majf=0, minf=5601 00:20:33.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:33.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:33.683 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:33.683 issued rwts: total=65250,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:33.683 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:33.683 00:20:33.683 Run status group 0 (all jobs): 00:20:33.683 READ: bw=17.5MiB/s (18.4MB/s), 8981KiB/s-9016KiB/s (9197kB/s-9233kB/s), io=510MiB (535MB), run=28948-29122msec 00:20:33.683 WRITE: bw=18.3MiB/s (19.2MB/s), 9394KiB/s-10.8MiB/s (9620kB/s-11.3MB/s), io=512MiB (537MB), run=23761-27904msec 00:20:34.251 ----------------------------------------------------- 00:20:34.251 Suppressions used: 00:20:34.251 count bytes template 00:20:34.251 2 10 /usr/src/fio/parse.c 00:20:34.251 4 384 /usr/src/fio/iolog.c 00:20:34.251 1 8 libtcmalloc_minimal.so 00:20:34.251 1 904 libcrypto.so 00:20:34.251 ----------------------------------------------------- 00:20:34.251 00:20:34.251 09:26:20 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:20:34.251 09:26:20 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:34.251 09:26:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:34.251 09:26:20 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:34.251 09:26:20 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:20:34.251 09:26:20 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:34.251 09:26:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:34.251 09:26:20 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:34.251 09:26:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:34.251 09:26:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:34.251 09:26:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:34.251 09:26:20 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:20:34.251 09:26:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:34.251 09:26:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:20:34.251 09:26:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:34.251 09:26:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:34.251 09:26:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:34.251 09:26:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:20:34.251 09:26:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:34.509 09:26:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:34.509 09:26:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:34.509 09:26:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:20:34.509 09:26:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:34.509 09:26:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:34.509 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:34.509 fio-3.35 00:20:34.509 Starting 1 thread 00:20:52.615 00:20:52.615 test: (groupid=0, jobs=1): err= 0: pid=80172: Fri Jul 12 09:26:38 2024 00:20:52.615 read: IOPS=6328, BW=24.7MiB/s (25.9MB/s)(255MiB/10303msec) 00:20:52.615 slat (usec): min=4, max=124, avg= 6.97, stdev= 1.78 00:20:52.615 clat (usec): min=764, max=47677, avg=20215.53, stdev=1952.53 00:20:52.615 lat (usec): min=769, max=47684, avg=20222.50, stdev=1952.60 00:20:52.615 clat percentiles (usec): 00:20:52.615 | 1.00th=[18744], 5.00th=[19006], 10.00th=[19006], 20.00th=[19268], 00:20:52.615 | 30.00th=[19530], 40.00th=[19530], 50.00th=[19530], 60.00th=[19792], 00:20:52.615 | 70.00th=[20055], 80.00th=[20579], 90.00th=[22152], 95.00th=[22938], 00:20:52.615 | 99.00th=[26608], 99.50th=[27395], 99.90th=[45876], 99.95th=[46924], 00:20:52.615 | 99.99th=[47449] 00:20:52.615 write: IOPS=11.3k, BW=44.2MiB/s (46.3MB/s)(256MiB/5798msec); 0 zone resets 00:20:52.615 slat (usec): min=6, max=330, avg= 9.48, stdev= 4.39 00:20:52.615 clat (usec): min=683, max=69770, avg=11261.29, stdev=14220.84 00:20:52.615 lat (usec): min=694, max=69780, avg=11270.77, stdev=14220.85 00:20:52.615 clat percentiles (usec): 00:20:52.615 | 1.00th=[ 1004], 5.00th=[ 1205], 10.00th=[ 1336], 20.00th=[ 1532], 00:20:52.615 | 30.00th=[ 1745], 40.00th=[ 2311], 50.00th=[ 7373], 60.00th=[ 8455], 00:20:52.615 | 70.00th=[ 9765], 80.00th=[11338], 90.00th=[41157], 95.00th=[44827], 00:20:52.615 | 99.00th=[49021], 99.50th=[50594], 99.90th=[53740], 99.95th=[57410], 00:20:52.615 | 99.99th=[64750] 00:20:52.615 bw ( KiB/s): min=23552, max=63048, per=96.63%, avg=43690.67, stdev=10931.12, samples=12 00:20:52.615 iops : min= 5888, max=15762, avg=10922.67, stdev=2732.78, samples=12 00:20:52.615 lat (usec) : 750=0.01%, 1000=0.50% 00:20:52.615 lat (msec) : 2=18.11%, 4=2.26%, 10=15.44%, 20=40.56%, 50=22.81% 00:20:52.615 lat (msec) : 100=0.32% 00:20:52.615 cpu : usr=98.96%, sys=0.24%, ctx=33, majf=0, minf=5568 00:20:52.615 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:52.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.615 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:52.615 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.615 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:52.615 00:20:52.615 Run status group 0 (all jobs): 00:20:52.615 READ: bw=24.7MiB/s (25.9MB/s), 24.7MiB/s-24.7MiB/s (25.9MB/s-25.9MB/s), io=255MiB (267MB), run=10303-10303msec 00:20:52.615 WRITE: bw=44.2MiB/s (46.3MB/s), 44.2MiB/s-44.2MiB/s (46.3MB/s-46.3MB/s), io=256MiB (268MB), run=5798-5798msec 00:20:53.550 ----------------------------------------------------- 00:20:53.550 Suppressions used: 00:20:53.550 count bytes template 00:20:53.550 1 5 /usr/src/fio/parse.c 00:20:53.550 2 192 /usr/src/fio/iolog.c 00:20:53.550 1 8 libtcmalloc_minimal.so 00:20:53.550 1 904 libcrypto.so 00:20:53.550 ----------------------------------------------------- 00:20:53.550 00:20:53.550 09:26:39 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:20:53.550 09:26:39 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:53.550 09:26:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:53.810 09:26:39 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:53.810 09:26:39 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:20:53.810 Remove shared memory files 00:20:53.810 09:26:39 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:53.810 09:26:39 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:20:53.810 09:26:39 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:20:53.810 09:26:39 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid62335 /dev/shm/spdk_tgt_trace.pid78401 00:20:53.810 09:26:39 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:53.810 09:26:39 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:20:53.810 00:20:53.810 real 1m13.348s 00:20:53.810 user 2m41.916s 00:20:53.810 sys 0m3.556s 00:20:53.810 09:26:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:53.810 09:26:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:53.810 ************************************ 00:20:53.810 END TEST ftl_fio_basic 00:20:53.810 ************************************ 00:20:53.810 09:26:39 ftl -- common/autotest_common.sh@1142 -- # return 0 00:20:53.810 09:26:39 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:20:53.810 09:26:39 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:53.810 09:26:39 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:53.810 09:26:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:53.810 ************************************ 00:20:53.810 START TEST ftl_bdevperf 00:20:53.810 ************************************ 00:20:53.810 09:26:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:20:53.810 * Looking for test storage... 
00:20:53.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:20:53.810 09:26:40 ftl.ftl_bdevperf 
-- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=80422 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 80422 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 80422 ']' 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:53.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:53.810 09:26:40 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:54.069 [2024-07-12 09:26:40.225626] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:54.069 [2024-07-12 09:26:40.225824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80422 ] 00:20:54.069 [2024-07-12 09:26:40.410177] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.328 [2024-07-12 09:26:40.636279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.895 09:26:41 ftl.ftl_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:54.895 09:26:41 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:20:54.895 09:26:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:54.895 09:26:41 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:20:54.895 09:26:41 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:54.895 09:26:41 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:20:54.895 09:26:41 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:20:54.895 09:26:41 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:55.462 09:26:41 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:55.462 09:26:41 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:20:55.462 09:26:41 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:55.462 09:26:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:20:55.462 09:26:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:55.462 09:26:41 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:20:55.462 09:26:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:20:55.463 09:26:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:55.463 09:26:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:55.463 { 00:20:55.463 "name": "nvme0n1", 00:20:55.463 "aliases": [ 00:20:55.463 "0a439486-c718-4156-829f-b8faa6a7ea8a" 00:20:55.463 ], 00:20:55.463 "product_name": "NVMe disk", 00:20:55.463 "block_size": 4096, 00:20:55.463 "num_blocks": 1310720, 00:20:55.463 "uuid": "0a439486-c718-4156-829f-b8faa6a7ea8a", 00:20:55.463 "assigned_rate_limits": { 00:20:55.463 "rw_ios_per_sec": 0, 00:20:55.463 "rw_mbytes_per_sec": 0, 00:20:55.463 "r_mbytes_per_sec": 0, 00:20:55.463 "w_mbytes_per_sec": 0 00:20:55.463 }, 00:20:55.463 "claimed": true, 00:20:55.463 "claim_type": "read_many_write_one", 00:20:55.463 "zoned": false, 00:20:55.463 "supported_io_types": { 00:20:55.463 "read": true, 00:20:55.463 "write": true, 00:20:55.463 "unmap": true, 00:20:55.463 "flush": true, 00:20:55.463 "reset": true, 00:20:55.463 "nvme_admin": true, 00:20:55.463 "nvme_io": true, 00:20:55.463 "nvme_io_md": false, 00:20:55.463 "write_zeroes": true, 00:20:55.463 "zcopy": false, 00:20:55.463 "get_zone_info": false, 00:20:55.463 "zone_management": false, 00:20:55.463 "zone_append": false, 00:20:55.463 "compare": true, 00:20:55.463 "compare_and_write": false, 00:20:55.463 "abort": true, 00:20:55.463 "seek_hole": false, 00:20:55.463 "seek_data": false, 00:20:55.463 "copy": true, 00:20:55.463 "nvme_iov_md": false 00:20:55.463 }, 00:20:55.463 "driver_specific": { 00:20:55.463 "nvme": [ 00:20:55.463 { 00:20:55.463 "pci_address": "0000:00:11.0", 00:20:55.463 "trid": { 00:20:55.463 "trtype": "PCIe", 00:20:55.463 "traddr": "0000:00:11.0" 00:20:55.463 }, 00:20:55.463 "ctrlr_data": { 00:20:55.463 "cntlid": 0, 00:20:55.463 "vendor_id": "0x1b36", 00:20:55.463 "model_number": "QEMU NVMe Ctrl", 00:20:55.463 "serial_number": "12341", 00:20:55.463 "firmware_revision": "8.0.0", 00:20:55.463 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:55.463 "oacs": { 00:20:55.463 "security": 0, 00:20:55.463 "format": 1, 00:20:55.463 "firmware": 0, 00:20:55.463 "ns_manage": 1 00:20:55.463 }, 00:20:55.463 "multi_ctrlr": false, 00:20:55.463 "ana_reporting": false 00:20:55.463 }, 00:20:55.463 "vs": { 00:20:55.463 "nvme_version": "1.4" 00:20:55.463 }, 00:20:55.463 "ns_data": { 00:20:55.463 "id": 1, 00:20:55.463 "can_share": false 00:20:55.463 } 00:20:55.463 } 00:20:55.463 ], 00:20:55.463 "mp_policy": "active_passive" 00:20:55.463 } 00:20:55.463 } 00:20:55.463 ]' 00:20:55.463 09:26:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:55.463 09:26:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:20:55.722 09:26:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:55.722 09:26:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:20:55.722 09:26:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:20:55.722 09:26:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:20:55.722 09:26:41 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:20:55.722 09:26:41 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:55.722 09:26:41 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:20:55.722 09:26:41 ftl.ftl_bdevperf 
-- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:55.722 09:26:41 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:55.981 09:26:42 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=e59cf01f-c076-4ace-b5c2-a23064d89771 00:20:55.981 09:26:42 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:20:55.981 09:26:42 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e59cf01f-c076-4ace-b5c2-a23064d89771 00:20:56.238 09:26:42 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:56.495 09:26:42 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=655e3de1-81c6-4cb5-a16e-8e878022002c 00:20:56.495 09:26:42 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 655e3de1-81c6-4cb5-a16e-8e878022002c 00:20:56.754 09:26:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=1e25a4e4-5a4a-4d1d-920c-6612cabcc8b2 00:20:56.754 09:26:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1e25a4e4-5a4a-4d1d-920c-6612cabcc8b2 00:20:56.754 09:26:43 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:20:56.754 09:26:43 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:56.754 09:26:43 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=1e25a4e4-5a4a-4d1d-920c-6612cabcc8b2 00:20:56.754 09:26:43 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:20:56.754 09:26:43 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 1e25a4e4-5a4a-4d1d-920c-6612cabcc8b2 00:20:56.754 09:26:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=1e25a4e4-5a4a-4d1d-920c-6612cabcc8b2 00:20:56.754 09:26:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:56.754 09:26:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:20:56.754 09:26:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:20:56.754 09:26:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1e25a4e4-5a4a-4d1d-920c-6612cabcc8b2 00:20:57.010 09:26:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:57.010 { 00:20:57.010 "name": "1e25a4e4-5a4a-4d1d-920c-6612cabcc8b2", 00:20:57.010 "aliases": [ 00:20:57.010 "lvs/nvme0n1p0" 00:20:57.010 ], 00:20:57.010 "product_name": "Logical Volume", 00:20:57.010 "block_size": 4096, 00:20:57.010 "num_blocks": 26476544, 00:20:57.010 "uuid": "1e25a4e4-5a4a-4d1d-920c-6612cabcc8b2", 00:20:57.010 "assigned_rate_limits": { 00:20:57.010 "rw_ios_per_sec": 0, 00:20:57.010 "rw_mbytes_per_sec": 0, 00:20:57.010 "r_mbytes_per_sec": 0, 00:20:57.010 "w_mbytes_per_sec": 0 00:20:57.010 }, 00:20:57.010 "claimed": false, 00:20:57.010 "zoned": false, 00:20:57.010 "supported_io_types": { 00:20:57.010 "read": true, 00:20:57.010 "write": true, 00:20:57.010 "unmap": true, 00:20:57.010 "flush": false, 00:20:57.010 "reset": true, 00:20:57.010 "nvme_admin": false, 00:20:57.010 "nvme_io": false, 00:20:57.010 "nvme_io_md": false, 00:20:57.010 "write_zeroes": true, 00:20:57.010 "zcopy": false, 00:20:57.010 "get_zone_info": false, 00:20:57.010 "zone_management": false, 00:20:57.010 "zone_append": false, 00:20:57.011 "compare": false, 00:20:57.011 "compare_and_write": false, 00:20:57.011 "abort": false, 00:20:57.011 "seek_hole": true, 
00:20:57.011 "seek_data": true, 00:20:57.011 "copy": false, 00:20:57.011 "nvme_iov_md": false 00:20:57.011 }, 00:20:57.011 "driver_specific": { 00:20:57.011 "lvol": { 00:20:57.011 "lvol_store_uuid": "655e3de1-81c6-4cb5-a16e-8e878022002c", 00:20:57.011 "base_bdev": "nvme0n1", 00:20:57.011 "thin_provision": true, 00:20:57.011 "num_allocated_clusters": 0, 00:20:57.011 "snapshot": false, 00:20:57.011 "clone": false, 00:20:57.011 "esnap_clone": false 00:20:57.011 } 00:20:57.011 } 00:20:57.011 } 00:20:57.011 ]' 00:20:57.011 09:26:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:57.268 09:26:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:20:57.268 09:26:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:57.268 09:26:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:57.268 09:26:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:57.268 09:26:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:20:57.268 09:26:43 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:20:57.268 09:26:43 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:20:57.268 09:26:43 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:57.525 09:26:43 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:57.525 09:26:43 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:57.525 09:26:43 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 1e25a4e4-5a4a-4d1d-920c-6612cabcc8b2 00:20:57.525 09:26:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=1e25a4e4-5a4a-4d1d-920c-6612cabcc8b2 00:20:57.525 09:26:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:57.525 09:26:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:20:57.525 09:26:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:20:57.525 09:26:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1e25a4e4-5a4a-4d1d-920c-6612cabcc8b2 00:20:57.836 09:26:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:57.836 { 00:20:57.836 "name": "1e25a4e4-5a4a-4d1d-920c-6612cabcc8b2", 00:20:57.836 "aliases": [ 00:20:57.836 "lvs/nvme0n1p0" 00:20:57.836 ], 00:20:57.836 "product_name": "Logical Volume", 00:20:57.836 "block_size": 4096, 00:20:57.836 "num_blocks": 26476544, 00:20:57.836 "uuid": "1e25a4e4-5a4a-4d1d-920c-6612cabcc8b2", 00:20:57.836 "assigned_rate_limits": { 00:20:57.836 "rw_ios_per_sec": 0, 00:20:57.836 "rw_mbytes_per_sec": 0, 00:20:57.836 "r_mbytes_per_sec": 0, 00:20:57.836 "w_mbytes_per_sec": 0 00:20:57.836 }, 00:20:57.836 "claimed": false, 00:20:57.836 "zoned": false, 00:20:57.836 "supported_io_types": { 00:20:57.836 "read": true, 00:20:57.836 "write": true, 00:20:57.836 "unmap": true, 00:20:57.836 "flush": false, 00:20:57.836 "reset": true, 00:20:57.836 "nvme_admin": false, 00:20:57.836 "nvme_io": false, 00:20:57.836 "nvme_io_md": false, 00:20:57.836 "write_zeroes": true, 00:20:57.836 "zcopy": false, 00:20:57.836 "get_zone_info": false, 00:20:57.836 "zone_management": false, 00:20:57.836 "zone_append": false, 00:20:57.836 "compare": false, 00:20:57.836 "compare_and_write": false, 00:20:57.836 "abort": false, 00:20:57.836 "seek_hole": true, 00:20:57.836 "seek_data": true, 00:20:57.836 
"copy": false, 00:20:57.836 "nvme_iov_md": false 00:20:57.836 }, 00:20:57.836 "driver_specific": { 00:20:57.836 "lvol": { 00:20:57.836 "lvol_store_uuid": "655e3de1-81c6-4cb5-a16e-8e878022002c", 00:20:57.836 "base_bdev": "nvme0n1", 00:20:57.836 "thin_provision": true, 00:20:57.836 "num_allocated_clusters": 0, 00:20:57.836 "snapshot": false, 00:20:57.836 "clone": false, 00:20:57.836 "esnap_clone": false 00:20:57.836 } 00:20:57.836 } 00:20:57.836 } 00:20:57.836 ]' 00:20:57.836 09:26:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:57.836 09:26:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:20:57.836 09:26:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:57.836 09:26:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:57.836 09:26:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:57.836 09:26:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:20:57.836 09:26:44 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:20:57.836 09:26:44 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:58.401 09:26:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:20:58.401 09:26:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size 1e25a4e4-5a4a-4d1d-920c-6612cabcc8b2 00:20:58.401 09:26:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=1e25a4e4-5a4a-4d1d-920c-6612cabcc8b2 00:20:58.401 09:26:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:58.401 09:26:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:20:58.401 09:26:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:20:58.401 09:26:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1e25a4e4-5a4a-4d1d-920c-6612cabcc8b2 00:20:58.401 09:26:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:58.401 { 00:20:58.401 "name": "1e25a4e4-5a4a-4d1d-920c-6612cabcc8b2", 00:20:58.401 "aliases": [ 00:20:58.401 "lvs/nvme0n1p0" 00:20:58.401 ], 00:20:58.401 "product_name": "Logical Volume", 00:20:58.401 "block_size": 4096, 00:20:58.401 "num_blocks": 26476544, 00:20:58.401 "uuid": "1e25a4e4-5a4a-4d1d-920c-6612cabcc8b2", 00:20:58.401 "assigned_rate_limits": { 00:20:58.401 "rw_ios_per_sec": 0, 00:20:58.401 "rw_mbytes_per_sec": 0, 00:20:58.401 "r_mbytes_per_sec": 0, 00:20:58.401 "w_mbytes_per_sec": 0 00:20:58.401 }, 00:20:58.401 "claimed": false, 00:20:58.401 "zoned": false, 00:20:58.401 "supported_io_types": { 00:20:58.401 "read": true, 00:20:58.401 "write": true, 00:20:58.401 "unmap": true, 00:20:58.401 "flush": false, 00:20:58.401 "reset": true, 00:20:58.401 "nvme_admin": false, 00:20:58.401 "nvme_io": false, 00:20:58.401 "nvme_io_md": false, 00:20:58.401 "write_zeroes": true, 00:20:58.401 "zcopy": false, 00:20:58.401 "get_zone_info": false, 00:20:58.401 "zone_management": false, 00:20:58.401 "zone_append": false, 00:20:58.401 "compare": false, 00:20:58.401 "compare_and_write": false, 00:20:58.401 "abort": false, 00:20:58.401 "seek_hole": true, 00:20:58.401 "seek_data": true, 00:20:58.401 "copy": false, 00:20:58.401 "nvme_iov_md": false 00:20:58.401 }, 00:20:58.401 "driver_specific": { 00:20:58.401 "lvol": { 00:20:58.401 "lvol_store_uuid": "655e3de1-81c6-4cb5-a16e-8e878022002c", 00:20:58.401 "base_bdev": 
"nvme0n1", 00:20:58.401 "thin_provision": true, 00:20:58.401 "num_allocated_clusters": 0, 00:20:58.401 "snapshot": false, 00:20:58.401 "clone": false, 00:20:58.401 "esnap_clone": false 00:20:58.401 } 00:20:58.401 } 00:20:58.401 } 00:20:58.401 ]' 00:20:58.401 09:26:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:58.658 09:26:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:20:58.658 09:26:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:58.658 09:26:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:58.658 09:26:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:58.658 09:26:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:20:58.658 09:26:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:20:58.658 09:26:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1e25a4e4-5a4a-4d1d-920c-6612cabcc8b2 -c nvc0n1p0 --l2p_dram_limit 20 00:20:58.918 [2024-07-12 09:26:45.075673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.918 [2024-07-12 09:26:45.075749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:58.918 [2024-07-12 09:26:45.075795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:58.918 [2024-07-12 09:26:45.075831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.918 [2024-07-12 09:26:45.075981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.918 [2024-07-12 09:26:45.076011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:58.918 [2024-07-12 09:26:45.076036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:20:58.918 [2024-07-12 09:26:45.076077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.918 [2024-07-12 09:26:45.076165] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:58.918 [2024-07-12 09:26:45.077660] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:58.918 [2024-07-12 09:26:45.077733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.918 [2024-07-12 09:26:45.077765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:58.918 [2024-07-12 09:26:45.077794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.591 ms 00:20:58.918 [2024-07-12 09:26:45.077816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.918 [2024-07-12 09:26:45.077963] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 49058b75-4d81-4a4e-84cc-64415c03d5fc 00:20:58.918 [2024-07-12 09:26:45.079172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.918 [2024-07-12 09:26:45.079243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:58.918 [2024-07-12 09:26:45.079274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:58.918 [2024-07-12 09:26:45.079306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.918 [2024-07-12 09:26:45.084073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.918 [2024-07-12 09:26:45.084147] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:58.918 [2024-07-12 09:26:45.084175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.629 ms 00:20:58.918 [2024-07-12 09:26:45.084228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.918 [2024-07-12 09:26:45.084391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.918 [2024-07-12 09:26:45.084439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:58.918 [2024-07-12 09:26:45.084475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:20:58.918 [2024-07-12 09:26:45.084504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.918 [2024-07-12 09:26:45.084617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.918 [2024-07-12 09:26:45.084658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:58.918 [2024-07-12 09:26:45.084681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:58.918 [2024-07-12 09:26:45.084704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.918 [2024-07-12 09:26:45.084754] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:58.918 [2024-07-12 09:26:45.089535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.918 [2024-07-12 09:26:45.089620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:58.918 [2024-07-12 09:26:45.089659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.791 ms 00:20:58.918 [2024-07-12 09:26:45.089681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.918 [2024-07-12 09:26:45.089753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.918 [2024-07-12 09:26:45.089785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:58.918 [2024-07-12 09:26:45.089814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:58.918 [2024-07-12 09:26:45.089837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.918 [2024-07-12 09:26:45.089924] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:58.918 [2024-07-12 09:26:45.090154] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:58.918 [2024-07-12 09:26:45.090225] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:58.918 [2024-07-12 09:26:45.090254] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:58.918 [2024-07-12 09:26:45.090284] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:58.918 [2024-07-12 09:26:45.090320] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:58.918 [2024-07-12 09:26:45.090346] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:58.918 [2024-07-12 09:26:45.090365] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:58.918 [2024-07-12 09:26:45.090390] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:58.918 [2024-07-12 09:26:45.090409] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache chunk count 5 00:20:58.918 [2024-07-12 09:26:45.090434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.918 [2024-07-12 09:26:45.090455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:58.918 [2024-07-12 09:26:45.090479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:20:58.918 [2024-07-12 09:26:45.090503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.918 [2024-07-12 09:26:45.090630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.918 [2024-07-12 09:26:45.090657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:58.918 [2024-07-12 09:26:45.090682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:20:58.918 [2024-07-12 09:26:45.090703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.918 [2024-07-12 09:26:45.090848] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:58.918 [2024-07-12 09:26:45.090877] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:58.918 [2024-07-12 09:26:45.090902] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:58.918 [2024-07-12 09:26:45.090923] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.918 [2024-07-12 09:26:45.090951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:58.918 [2024-07-12 09:26:45.090970] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:58.918 [2024-07-12 09:26:45.090993] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:58.918 [2024-07-12 09:26:45.091012] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:58.918 [2024-07-12 09:26:45.091033] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:58.918 [2024-07-12 09:26:45.091053] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:58.918 [2024-07-12 09:26:45.091075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:58.918 [2024-07-12 09:26:45.091094] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:58.918 [2024-07-12 09:26:45.091116] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:58.918 [2024-07-12 09:26:45.091135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:58.918 [2024-07-12 09:26:45.091160] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:58.918 [2024-07-12 09:26:45.091179] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.918 [2024-07-12 09:26:45.091223] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:58.918 [2024-07-12 09:26:45.091245] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:58.918 [2024-07-12 09:26:45.091284] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.918 [2024-07-12 09:26:45.091304] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:58.918 [2024-07-12 09:26:45.091326] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:58.919 [2024-07-12 09:26:45.091344] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.919 [2024-07-12 09:26:45.091364] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:58.919 [2024-07-12 09:26:45.091383] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:58.919 [2024-07-12 09:26:45.091419] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.919 [2024-07-12 09:26:45.091441] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:58.919 [2024-07-12 09:26:45.091464] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:58.919 [2024-07-12 09:26:45.091485] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.919 [2024-07-12 09:26:45.091508] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:58.919 [2024-07-12 09:26:45.091528] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:58.919 [2024-07-12 09:26:45.091549] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.919 [2024-07-12 09:26:45.091569] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:58.919 [2024-07-12 09:26:45.091594] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:58.919 [2024-07-12 09:26:45.091613] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:58.919 [2024-07-12 09:26:45.091635] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:58.919 [2024-07-12 09:26:45.091654] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:58.919 [2024-07-12 09:26:45.091676] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:58.919 [2024-07-12 09:26:45.091696] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:58.919 [2024-07-12 09:26:45.091736] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:58.919 [2024-07-12 09:26:45.091757] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.919 [2024-07-12 09:26:45.091778] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:58.919 [2024-07-12 09:26:45.091797] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:58.919 [2024-07-12 09:26:45.091819] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.919 [2024-07-12 09:26:45.091836] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:58.919 [2024-07-12 09:26:45.091859] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:58.919 [2024-07-12 09:26:45.091880] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:58.919 [2024-07-12 09:26:45.091902] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.919 [2024-07-12 09:26:45.091922] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:58.919 [2024-07-12 09:26:45.091948] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:58.919 [2024-07-12 09:26:45.091968] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:58.919 [2024-07-12 09:26:45.091990] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:58.919 [2024-07-12 09:26:45.092009] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:58.919 [2024-07-12 09:26:45.092032] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:58.919 [2024-07-12 09:26:45.092059] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:58.919 [2024-07-12 09:26:45.092088] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:58.919 [2024-07-12 09:26:45.092110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:58.919 [2024-07-12 09:26:45.092134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:58.919 [2024-07-12 09:26:45.092155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:58.919 [2024-07-12 09:26:45.092179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:58.919 [2024-07-12 09:26:45.092217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:58.919 [2024-07-12 09:26:45.092262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:58.919 [2024-07-12 09:26:45.092284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:58.919 [2024-07-12 09:26:45.092307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:58.919 [2024-07-12 09:26:45.092328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:58.919 [2024-07-12 09:26:45.092359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:58.919 [2024-07-12 09:26:45.092381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:58.919 [2024-07-12 09:26:45.092404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:58.919 [2024-07-12 09:26:45.092423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:58.919 [2024-07-12 09:26:45.092448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:58.919 [2024-07-12 09:26:45.092469] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:58.919 [2024-07-12 09:26:45.092496] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:58.919 [2024-07-12 09:26:45.092516] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:58.919 [2024-07-12 09:26:45.092540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:58.919 [2024-07-12 09:26:45.092562] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:58.919 [2024-07-12 09:26:45.092586] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:58.919 [2024-07-12 09:26:45.092609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.919 [2024-07-12 09:26:45.092635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:58.919 [2024-07-12 09:26:45.092662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.852 ms 00:20:58.919 [2024-07-12 09:26:45.092688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.919 [2024-07-12 09:26:45.092760] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:58.919 [2024-07-12 09:26:45.092799] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:00.818 [2024-07-12 09:26:47.141985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.818 [2024-07-12 09:26:47.142064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:00.818 [2024-07-12 09:26:47.142089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2049.235 ms 00:21:00.818 [2024-07-12 09:26:47.142111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.077 [2024-07-12 09:26:47.191750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.077 [2024-07-12 09:26:47.191821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:01.077 [2024-07-12 09:26:47.191852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.299 ms 00:21:01.077 [2024-07-12 09:26:47.191872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.077 [2024-07-12 09:26:47.192095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.077 [2024-07-12 09:26:47.192126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:01.077 [2024-07-12 09:26:47.192144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:21:01.077 [2024-07-12 09:26:47.192164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.077 [2024-07-12 09:26:47.239569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.077 [2024-07-12 09:26:47.239650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:01.077 [2024-07-12 09:26:47.239674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.314 ms 00:21:01.077 [2024-07-12 09:26:47.239691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.077 [2024-07-12 09:26:47.239766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.077 [2024-07-12 09:26:47.239797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:01.077 [2024-07-12 09:26:47.239814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:01.077 [2024-07-12 09:26:47.239830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.077 [2024-07-12 09:26:47.240282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.077 [2024-07-12 09:26:47.240316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:01.077 [2024-07-12 09:26:47.240332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:21:01.077 [2024-07-12 09:26:47.240348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.077 [2024-07-12 09:26:47.240519] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.077 [2024-07-12 09:26:47.240546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:01.077 [2024-07-12 09:26:47.240561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:21:01.077 [2024-07-12 09:26:47.240579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.077 [2024-07-12 09:26:47.260326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.077 [2024-07-12 09:26:47.260395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:01.077 [2024-07-12 09:26:47.260417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.714 ms 00:21:01.077 [2024-07-12 09:26:47.260435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.077 [2024-07-12 09:26:47.276937] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:21:01.077 [2024-07-12 09:26:47.282937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.077 [2024-07-12 09:26:47.282995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:01.077 [2024-07-12 09:26:47.283022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.352 ms 00:21:01.077 [2024-07-12 09:26:47.283037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.077 [2024-07-12 09:26:47.340937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.077 [2024-07-12 09:26:47.341031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:01.077 [2024-07-12 09:26:47.341054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.833 ms 00:21:01.077 [2024-07-12 09:26:47.341067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.078 [2024-07-12 09:26:47.341370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.078 [2024-07-12 09:26:47.341392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:01.078 [2024-07-12 09:26:47.341410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:21:01.078 [2024-07-12 09:26:47.341421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.078 [2024-07-12 09:26:47.372090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.078 [2024-07-12 09:26:47.372150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:01.078 [2024-07-12 09:26:47.372170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.581 ms 00:21:01.078 [2024-07-12 09:26:47.372183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.078 [2024-07-12 09:26:47.402037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.078 [2024-07-12 09:26:47.402106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:01.078 [2024-07-12 09:26:47.402128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.769 ms 00:21:01.078 [2024-07-12 09:26:47.402139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.078 [2024-07-12 09:26:47.402881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.078 [2024-07-12 09:26:47.402941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:01.078 [2024-07-12 09:26:47.402958] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.660 ms 00:21:01.078 [2024-07-12 09:26:47.402971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.336 [2024-07-12 09:26:47.491351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.337 [2024-07-12 09:26:47.491456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:01.337 [2024-07-12 09:26:47.491483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.313 ms 00:21:01.337 [2024-07-12 09:26:47.491497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.337 [2024-07-12 09:26:47.523896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.337 [2024-07-12 09:26:47.523962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:01.337 [2024-07-12 09:26:47.523984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.341 ms 00:21:01.337 [2024-07-12 09:26:47.523995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.337 [2024-07-12 09:26:47.553870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.337 [2024-07-12 09:26:47.553942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:01.337 [2024-07-12 09:26:47.553963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.817 ms 00:21:01.337 [2024-07-12 09:26:47.553974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.337 [2024-07-12 09:26:47.585751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.337 [2024-07-12 09:26:47.585817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:01.337 [2024-07-12 09:26:47.585839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.715 ms 00:21:01.337 [2024-07-12 09:26:47.585851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.337 [2024-07-12 09:26:47.585938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.337 [2024-07-12 09:26:47.585957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:01.337 [2024-07-12 09:26:47.585992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:01.337 [2024-07-12 09:26:47.586004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.337 [2024-07-12 09:26:47.586148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.337 [2024-07-12 09:26:47.586170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:01.337 [2024-07-12 09:26:47.586186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:21:01.337 [2024-07-12 09:26:47.586213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.337 [2024-07-12 09:26:47.587323] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2511.137 ms, result 0 00:21:01.337 { 00:21:01.337 "name": "ftl0", 00:21:01.337 "uuid": "49058b75-4d81-4a4e-84cc-64415c03d5fc" 00:21:01.337 } 00:21:01.337 09:26:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:21:01.337 09:26:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:21:01.337 09:26:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:21:01.596 09:26:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:21:01.854 [2024-07-12 09:26:48.047787] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:01.854 I/O size of 69632 is greater than zero copy threshold (65536). 00:21:01.854 Zero copy mechanism will not be used. 00:21:01.854 Running I/O for 4 seconds... 00:21:06.040 00:21:06.040 Latency(us) 00:21:06.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.040 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:21:06.040 ftl0 : 4.00 1908.92 126.76 0.00 0.00 549.41 216.90 1705.43 00:21:06.040 =================================================================================================================== 00:21:06.040 Total : 1908.92 126.76 0.00 0.00 549.41 216.90 1705.43 00:21:06.040 0 00:21:06.040 [2024-07-12 09:26:52.058292] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:06.040 09:26:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:21:06.040 [2024-07-12 09:26:52.187722] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:06.040 Running I/O for 4 seconds... 00:21:10.233 00:21:10.233 Latency(us) 00:21:10.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.233 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:21:10.233 ftl0 : 4.02 7408.41 28.94 0.00 0.00 17223.63 323.96 30742.34 00:21:10.233 =================================================================================================================== 00:21:10.233 Total : 7408.41 28.94 0.00 0.00 17223.63 0.00 30742.34 00:21:10.233 0 00:21:10.233 [2024-07-12 09:26:56.219631] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:10.233 09:26:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:21:10.233 [2024-07-12 09:26:56.352078] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:10.233 Running I/O for 4 seconds... 
00:21:14.426 00:21:14.426 Latency(us) 00:21:14.426 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.426 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:14.426 Verification LBA range: start 0x0 length 0x1400000 00:21:14.426 ftl0 : 4.01 6099.95 23.83 0.00 0.00 20913.91 370.50 31933.91 00:21:14.426 =================================================================================================================== 00:21:14.426 Total : 6099.95 23.83 0.00 0.00 20913.91 0.00 31933.91 00:21:14.426 [2024-07-12 09:27:00.380814] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:14.426 0 00:21:14.426 09:27:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:21:14.426 [2024-07-12 09:27:00.666227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.426 [2024-07-12 09:27:00.666481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:14.426 [2024-07-12 09:27:00.666619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:14.426 [2024-07-12 09:27:00.666674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.426 [2024-07-12 09:27:00.666837] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:14.426 [2024-07-12 09:27:00.670245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.426 [2024-07-12 09:27:00.670415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:14.426 [2024-07-12 09:27:00.670538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.313 ms 00:21:14.426 [2024-07-12 09:27:00.670680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.426 [2024-07-12 09:27:00.672117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.426 [2024-07-12 09:27:00.672301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:14.426 [2024-07-12 09:27:00.672440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.358 ms 00:21:14.426 [2024-07-12 09:27:00.672613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.685 [2024-07-12 09:27:00.852614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.685 [2024-07-12 09:27:00.852877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:14.685 [2024-07-12 09:27:00.853015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 179.923 ms 00:21:14.685 [2024-07-12 09:27:00.853203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.685 [2024-07-12 09:27:00.859959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.685 [2024-07-12 09:27:00.860111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:14.685 [2024-07-12 09:27:00.860249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.656 ms 00:21:14.685 [2024-07-12 09:27:00.860426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.685 [2024-07-12 09:27:00.892483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.685 [2024-07-12 09:27:00.892651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:14.685 [2024-07-12 09:27:00.892777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 31.932 ms 00:21:14.685 [2024-07-12 09:27:00.892834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.685 [2024-07-12 09:27:00.912307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.685 [2024-07-12 09:27:00.912577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:14.685 [2024-07-12 09:27:00.912719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.282 ms 00:21:14.685 [2024-07-12 09:27:00.912782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.685 [2024-07-12 09:27:00.913111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.685 [2024-07-12 09:27:00.913282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:14.685 [2024-07-12 09:27:00.913415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:21:14.685 [2024-07-12 09:27:00.913478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.685 [2024-07-12 09:27:00.945382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.685 [2024-07-12 09:27:00.945565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:14.685 [2024-07-12 09:27:00.945688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.755 ms 00:21:14.685 [2024-07-12 09:27:00.945744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.685 [2024-07-12 09:27:00.977505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.685 [2024-07-12 09:27:00.977705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:14.685 [2024-07-12 09:27:00.977735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.683 ms 00:21:14.685 [2024-07-12 09:27:00.977751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.685 [2024-07-12 09:27:01.009493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.685 [2024-07-12 09:27:01.009565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:14.685 [2024-07-12 09:27:01.009597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.690 ms 00:21:14.685 [2024-07-12 09:27:01.009612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.943 [2024-07-12 09:27:01.041414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.943 [2024-07-12 09:27:01.041496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:14.943 [2024-07-12 09:27:01.041517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.641 ms 00:21:14.943 [2024-07-12 09:27:01.041534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.943 [2024-07-12 09:27:01.041602] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:14.943 [2024-07-12 09:27:01.041631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.041646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.041660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.041673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:21:14.943 [2024-07-12 09:27:01.041686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.041698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.041712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.041724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.041738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.041751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.041765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.041777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.041790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.041803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.041819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.041830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.041844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.041856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.041873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.041886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.041899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.041911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.041925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.041937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.041950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.041962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.041977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.041989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.042003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.042015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:14.943 [2024-07-12 09:27:01.042031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042717] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.042987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.043002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.043014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:14.944 [2024-07-12 09:27:01.043038] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:14.944 [2024-07-12 09:27:01.043050] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 49058b75-4d81-4a4e-84cc-64415c03d5fc 00:21:14.944 [2024-07-12 09:27:01.043064] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:14.944 [2024-07-12 09:27:01.043075] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:21:14.944 [2024-07-12 09:27:01.043088] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:14.944 [2024-07-12 09:27:01.043100] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:14.944 [2024-07-12 09:27:01.043116] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:14.944 [2024-07-12 09:27:01.043128] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:14.944 [2024-07-12 09:27:01.043141] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:14.944 [2024-07-12 09:27:01.043151] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:14.944 [2024-07-12 09:27:01.043165] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:14.944 [2024-07-12 09:27:01.043177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.944 [2024-07-12 09:27:01.043206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:14.944 [2024-07-12 09:27:01.043220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.577 ms 00:21:14.944 [2024-07-12 09:27:01.043233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.944 [2024-07-12 09:27:01.062113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.944 [2024-07-12 09:27:01.062158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:14.944 [2024-07-12 09:27:01.062178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.797 ms 00:21:14.944 [2024-07-12 09:27:01.062215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.944 [2024-07-12 09:27:01.062736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.944 [2024-07-12 09:27:01.062769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:14.944 [2024-07-12 09:27:01.062784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.477 ms 00:21:14.944 [2024-07-12 09:27:01.062798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.944 [2024-07-12 09:27:01.106020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.944 [2024-07-12 09:27:01.106093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:14.944 [2024-07-12 09:27:01.106114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.944 [2024-07-12 09:27:01.106130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.944 [2024-07-12 09:27:01.106232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.944 [2024-07-12 09:27:01.106255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:14.944 [2024-07-12 09:27:01.106268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.944 [2024-07-12 09:27:01.106281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.944 [2024-07-12 09:27:01.106401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.944 [2024-07-12 09:27:01.106425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:14.944 [2024-07-12 09:27:01.106439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.944 [2024-07-12 09:27:01.106454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.944 [2024-07-12 09:27:01.106479] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.944 [2024-07-12 09:27:01.106496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:14.944 [2024-07-12 09:27:01.106508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.944 [2024-07-12 09:27:01.106521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.944 [2024-07-12 09:27:01.206606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.944 [2024-07-12 09:27:01.206685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:14.944 [2024-07-12 09:27:01.206709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.944 [2024-07-12 09:27:01.206733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.944 [2024-07-12 09:27:01.290681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.944 [2024-07-12 09:27:01.290747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:14.944 [2024-07-12 09:27:01.290767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.944 [2024-07-12 09:27:01.290781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.944 [2024-07-12 09:27:01.290893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.944 [2024-07-12 09:27:01.290918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:14.944 [2024-07-12 09:27:01.290931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.944 [2024-07-12 09:27:01.290944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.944 [2024-07-12 09:27:01.291007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.944 [2024-07-12 09:27:01.291027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:14.944 [2024-07-12 09:27:01.291041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.944 [2024-07-12 09:27:01.291054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.944 [2024-07-12 09:27:01.291172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.944 [2024-07-12 09:27:01.291221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:14.944 [2024-07-12 09:27:01.291238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.944 [2024-07-12 09:27:01.291254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.944 [2024-07-12 09:27:01.291308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.944 [2024-07-12 09:27:01.291330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:14.944 [2024-07-12 09:27:01.291343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.944 [2024-07-12 09:27:01.291356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.944 [2024-07-12 09:27:01.291402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.944 [2024-07-12 09:27:01.291434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:14.944 [2024-07-12 09:27:01.291448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.944 [2024-07-12 09:27:01.291461] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:14.944 [2024-07-12 09:27:01.291520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.944 [2024-07-12 09:27:01.291541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:14.944 [2024-07-12 09:27:01.291553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.944 [2024-07-12 09:27:01.291566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.944 [2024-07-12 09:27:01.291726] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 625.476 ms, result 0 00:21:15.202 true 00:21:15.202 09:27:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 80422 00:21:15.202 09:27:01 ftl.ftl_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 80422 ']' 00:21:15.202 09:27:01 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # kill -0 80422 00:21:15.202 09:27:01 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # uname 00:21:15.202 09:27:01 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:15.202 09:27:01 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80422 00:21:15.202 09:27:01 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:15.202 09:27:01 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:15.202 09:27:01 ftl.ftl_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80422' 00:21:15.202 killing process with pid 80422 00:21:15.202 Received shutdown signal, test time was about 4.000000 seconds 00:21:15.202 00:21:15.202 Latency(us) 00:21:15.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.202 =================================================================================================================== 00:21:15.202 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:15.202 09:27:01 ftl.ftl_bdevperf -- common/autotest_common.sh@967 -- # kill 80422 00:21:15.202 09:27:01 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # wait 80422 00:21:19.416 09:27:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:21:19.416 09:27:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:21:19.416 09:27:04 ftl.ftl_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:19.416 09:27:04 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:19.416 Remove shared memory files 00:21:19.416 09:27:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm 00:21:19.416 09:27:05 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:19.416 09:27:05 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:21:19.416 09:27:05 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:21:19.416 09:27:05 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:21:19.416 09:27:05 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:19.416 09:27:05 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:21:19.416 ************************************ 00:21:19.416 END TEST ftl_bdevperf 00:21:19.416 ************************************ 00:21:19.416 00:21:19.416 real 0m25.040s 00:21:19.416 user 0m28.907s 00:21:19.416 sys 0m1.140s 00:21:19.416 09:27:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:19.416 09:27:05 ftl.ftl_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:21:19.416 09:27:05 ftl -- common/autotest_common.sh@1142 -- # return 0 00:21:19.416 09:27:05 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:21:19.416 09:27:05 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:19.416 09:27:05 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:19.416 09:27:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:19.416 ************************************ 00:21:19.416 START TEST ftl_trim 00:21:19.416 ************************************ 00:21:19.416 09:27:05 ftl.ftl_trim -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:21:19.416 * Looking for test storage... 00:21:19.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:19.416 
09:27:05 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=80781 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 80781 00:21:19.416 09:27:05 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:21:19.416 09:27:05 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 80781 ']' 00:21:19.416 09:27:05 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.416 09:27:05 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:19.416 09:27:05 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.416 09:27:05 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:19.416 09:27:05 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:19.416 [2024-07-12 09:27:05.300660] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
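For readers following the ftl_trim trace below: the test assembles its FTL device out of ordinary SPDK RPCs, all of which appear verbatim in the xtrace lines of this log. A rough sketch of that setup sequence is shown here for orientation only — device names, the PCIe addresses and the 103424 (MiB) base size are the ones used by this run, the UUID arguments are placeholders, and the closing nv-cache/bdev_ftl_create steps fall outside this excerpt, so they are noted only as a comment rather than as exact invocations:
# attach the base namespace at 0000:00:11.0 as bdev nvme0n1 (ftl/common.sh@60)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
# clear_lvols: list and delete any lvstore left over from a previous run (ftl/common.sh@28-30)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid'
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u <uuid-from-previous-step>
# create a fresh lvstore on nvme0n1 and a 103424 MiB thin-provisioned lvol as the FTL base device (ftl/common.sh@68-69)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>
# trim.sh then creates the compact nv-cache bdev on 0000:00:10.0 and builds the ftl0 bdev on top of both (not shown in this excerpt)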
00:21:19.416 [2024-07-12 09:27:05.301047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80781 ] 00:21:19.416 [2024-07-12 09:27:05.474356] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:19.416 [2024-07-12 09:27:05.698310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.416 [2024-07-12 09:27:05.698453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.416 [2024-07-12 09:27:05.698463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.352 09:27:06 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:20.352 09:27:06 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:21:20.352 09:27:06 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:20.352 09:27:06 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:21:20.352 09:27:06 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:20.352 09:27:06 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:21:20.352 09:27:06 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:21:20.352 09:27:06 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:20.609 09:27:06 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:20.609 09:27:06 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:21:20.609 09:27:06 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:20.609 09:27:06 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:21:20.609 09:27:06 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:20.609 09:27:06 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:21:20.609 09:27:06 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:21:20.609 09:27:06 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:20.867 09:27:07 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:20.867 { 00:21:20.867 "name": "nvme0n1", 00:21:20.867 "aliases": [ 00:21:20.867 "7ea8bd34-a358-48b8-b89e-4fda67d9c6c7" 00:21:20.867 ], 00:21:20.867 "product_name": "NVMe disk", 00:21:20.867 "block_size": 4096, 00:21:20.867 "num_blocks": 1310720, 00:21:20.867 "uuid": "7ea8bd34-a358-48b8-b89e-4fda67d9c6c7", 00:21:20.867 "assigned_rate_limits": { 00:21:20.867 "rw_ios_per_sec": 0, 00:21:20.867 "rw_mbytes_per_sec": 0, 00:21:20.867 "r_mbytes_per_sec": 0, 00:21:20.867 "w_mbytes_per_sec": 0 00:21:20.867 }, 00:21:20.867 "claimed": true, 00:21:20.867 "claim_type": "read_many_write_one", 00:21:20.867 "zoned": false, 00:21:20.867 "supported_io_types": { 00:21:20.867 "read": true, 00:21:20.867 "write": true, 00:21:20.867 "unmap": true, 00:21:20.867 "flush": true, 00:21:20.867 "reset": true, 00:21:20.867 "nvme_admin": true, 00:21:20.867 "nvme_io": true, 00:21:20.867 "nvme_io_md": false, 00:21:20.867 "write_zeroes": true, 00:21:20.867 "zcopy": false, 00:21:20.867 "get_zone_info": false, 00:21:20.867 "zone_management": false, 00:21:20.867 "zone_append": false, 00:21:20.867 "compare": true, 00:21:20.867 "compare_and_write": false, 00:21:20.867 "abort": true, 00:21:20.867 "seek_hole": false, 00:21:20.867 "seek_data": false, 00:21:20.867 
"copy": true, 00:21:20.867 "nvme_iov_md": false 00:21:20.867 }, 00:21:20.867 "driver_specific": { 00:21:20.867 "nvme": [ 00:21:20.867 { 00:21:20.867 "pci_address": "0000:00:11.0", 00:21:20.868 "trid": { 00:21:20.868 "trtype": "PCIe", 00:21:20.868 "traddr": "0000:00:11.0" 00:21:20.868 }, 00:21:20.868 "ctrlr_data": { 00:21:20.868 "cntlid": 0, 00:21:20.868 "vendor_id": "0x1b36", 00:21:20.868 "model_number": "QEMU NVMe Ctrl", 00:21:20.868 "serial_number": "12341", 00:21:20.868 "firmware_revision": "8.0.0", 00:21:20.868 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:20.868 "oacs": { 00:21:20.868 "security": 0, 00:21:20.868 "format": 1, 00:21:20.868 "firmware": 0, 00:21:20.868 "ns_manage": 1 00:21:20.868 }, 00:21:20.868 "multi_ctrlr": false, 00:21:20.868 "ana_reporting": false 00:21:20.868 }, 00:21:20.868 "vs": { 00:21:20.868 "nvme_version": "1.4" 00:21:20.868 }, 00:21:20.868 "ns_data": { 00:21:20.868 "id": 1, 00:21:20.868 "can_share": false 00:21:20.868 } 00:21:20.868 } 00:21:20.868 ], 00:21:20.868 "mp_policy": "active_passive" 00:21:20.868 } 00:21:20.868 } 00:21:20.868 ]' 00:21:20.868 09:27:07 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:20.868 09:27:07 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:21:20.868 09:27:07 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:20.868 09:27:07 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:21:20.868 09:27:07 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:21:20.868 09:27:07 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:21:20.868 09:27:07 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:21:20.868 09:27:07 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:20.868 09:27:07 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:21:20.868 09:27:07 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:20.868 09:27:07 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:21.126 09:27:07 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=655e3de1-81c6-4cb5-a16e-8e878022002c 00:21:21.126 09:27:07 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:21:21.126 09:27:07 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 655e3de1-81c6-4cb5-a16e-8e878022002c 00:21:21.384 09:27:07 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:21.642 09:27:07 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=02e87b22-9595-46a2-8a33-751f51830a98 00:21:21.642 09:27:07 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 02e87b22-9595-46a2-8a33-751f51830a98 00:21:21.900 09:27:08 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=2ba89b60-1547-49bb-ba6e-71c49b473d47 00:21:21.900 09:27:08 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2ba89b60-1547-49bb-ba6e-71c49b473d47 00:21:21.900 09:27:08 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:21:21.900 09:27:08 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:21.900 09:27:08 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=2ba89b60-1547-49bb-ba6e-71c49b473d47 00:21:21.900 09:27:08 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:21:21.900 09:27:08 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 2ba89b60-1547-49bb-ba6e-71c49b473d47 00:21:21.900 09:27:08 
ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=2ba89b60-1547-49bb-ba6e-71c49b473d47 00:21:21.900 09:27:08 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:21.900 09:27:08 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:21:21.900 09:27:08 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:21:21.900 09:27:08 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2ba89b60-1547-49bb-ba6e-71c49b473d47 00:21:22.158 09:27:08 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:22.158 { 00:21:22.158 "name": "2ba89b60-1547-49bb-ba6e-71c49b473d47", 00:21:22.158 "aliases": [ 00:21:22.158 "lvs/nvme0n1p0" 00:21:22.158 ], 00:21:22.158 "product_name": "Logical Volume", 00:21:22.158 "block_size": 4096, 00:21:22.158 "num_blocks": 26476544, 00:21:22.158 "uuid": "2ba89b60-1547-49bb-ba6e-71c49b473d47", 00:21:22.158 "assigned_rate_limits": { 00:21:22.158 "rw_ios_per_sec": 0, 00:21:22.158 "rw_mbytes_per_sec": 0, 00:21:22.158 "r_mbytes_per_sec": 0, 00:21:22.158 "w_mbytes_per_sec": 0 00:21:22.158 }, 00:21:22.158 "claimed": false, 00:21:22.158 "zoned": false, 00:21:22.158 "supported_io_types": { 00:21:22.158 "read": true, 00:21:22.158 "write": true, 00:21:22.158 "unmap": true, 00:21:22.158 "flush": false, 00:21:22.158 "reset": true, 00:21:22.158 "nvme_admin": false, 00:21:22.158 "nvme_io": false, 00:21:22.158 "nvme_io_md": false, 00:21:22.158 "write_zeroes": true, 00:21:22.158 "zcopy": false, 00:21:22.158 "get_zone_info": false, 00:21:22.158 "zone_management": false, 00:21:22.158 "zone_append": false, 00:21:22.158 "compare": false, 00:21:22.158 "compare_and_write": false, 00:21:22.158 "abort": false, 00:21:22.158 "seek_hole": true, 00:21:22.158 "seek_data": true, 00:21:22.158 "copy": false, 00:21:22.158 "nvme_iov_md": false 00:21:22.158 }, 00:21:22.158 "driver_specific": { 00:21:22.158 "lvol": { 00:21:22.158 "lvol_store_uuid": "02e87b22-9595-46a2-8a33-751f51830a98", 00:21:22.158 "base_bdev": "nvme0n1", 00:21:22.158 "thin_provision": true, 00:21:22.158 "num_allocated_clusters": 0, 00:21:22.158 "snapshot": false, 00:21:22.158 "clone": false, 00:21:22.158 "esnap_clone": false 00:21:22.158 } 00:21:22.158 } 00:21:22.158 } 00:21:22.158 ]' 00:21:22.158 09:27:08 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:22.158 09:27:08 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:21:22.416 09:27:08 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:22.416 09:27:08 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:22.416 09:27:08 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:22.416 09:27:08 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:21:22.416 09:27:08 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:21:22.416 09:27:08 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:21:22.416 09:27:08 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:22.675 09:27:08 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:22.675 09:27:08 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:22.675 09:27:08 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 2ba89b60-1547-49bb-ba6e-71c49b473d47 00:21:22.675 09:27:08 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=2ba89b60-1547-49bb-ba6e-71c49b473d47 00:21:22.675 
09:27:08 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:22.675 09:27:08 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:21:22.675 09:27:08 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:21:22.675 09:27:08 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2ba89b60-1547-49bb-ba6e-71c49b473d47 00:21:22.934 09:27:09 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:22.934 { 00:21:22.934 "name": "2ba89b60-1547-49bb-ba6e-71c49b473d47", 00:21:22.934 "aliases": [ 00:21:22.934 "lvs/nvme0n1p0" 00:21:22.934 ], 00:21:22.934 "product_name": "Logical Volume", 00:21:22.934 "block_size": 4096, 00:21:22.934 "num_blocks": 26476544, 00:21:22.934 "uuid": "2ba89b60-1547-49bb-ba6e-71c49b473d47", 00:21:22.934 "assigned_rate_limits": { 00:21:22.934 "rw_ios_per_sec": 0, 00:21:22.934 "rw_mbytes_per_sec": 0, 00:21:22.934 "r_mbytes_per_sec": 0, 00:21:22.934 "w_mbytes_per_sec": 0 00:21:22.934 }, 00:21:22.934 "claimed": false, 00:21:22.934 "zoned": false, 00:21:22.934 "supported_io_types": { 00:21:22.934 "read": true, 00:21:22.934 "write": true, 00:21:22.934 "unmap": true, 00:21:22.934 "flush": false, 00:21:22.934 "reset": true, 00:21:22.934 "nvme_admin": false, 00:21:22.934 "nvme_io": false, 00:21:22.934 "nvme_io_md": false, 00:21:22.934 "write_zeroes": true, 00:21:22.934 "zcopy": false, 00:21:22.934 "get_zone_info": false, 00:21:22.934 "zone_management": false, 00:21:22.934 "zone_append": false, 00:21:22.934 "compare": false, 00:21:22.934 "compare_and_write": false, 00:21:22.934 "abort": false, 00:21:22.934 "seek_hole": true, 00:21:22.934 "seek_data": true, 00:21:22.934 "copy": false, 00:21:22.934 "nvme_iov_md": false 00:21:22.934 }, 00:21:22.934 "driver_specific": { 00:21:22.934 "lvol": { 00:21:22.934 "lvol_store_uuid": "02e87b22-9595-46a2-8a33-751f51830a98", 00:21:22.934 "base_bdev": "nvme0n1", 00:21:22.934 "thin_provision": true, 00:21:22.934 "num_allocated_clusters": 0, 00:21:22.934 "snapshot": false, 00:21:22.934 "clone": false, 00:21:22.934 "esnap_clone": false 00:21:22.934 } 00:21:22.934 } 00:21:22.934 } 00:21:22.934 ]' 00:21:22.934 09:27:09 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:22.934 09:27:09 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:21:22.934 09:27:09 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:22.934 09:27:09 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:22.934 09:27:09 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:22.934 09:27:09 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:21:22.934 09:27:09 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:21:22.934 09:27:09 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:23.192 09:27:09 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:21:23.193 09:27:09 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:21:23.193 09:27:09 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 2ba89b60-1547-49bb-ba6e-71c49b473d47 00:21:23.193 09:27:09 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=2ba89b60-1547-49bb-ba6e-71c49b473d47 00:21:23.193 09:27:09 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:23.193 09:27:09 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:21:23.193 09:27:09 ftl.ftl_trim -- 
common/autotest_common.sh@1381 -- # local nb 00:21:23.193 09:27:09 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2ba89b60-1547-49bb-ba6e-71c49b473d47 00:21:23.452 09:27:09 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:23.452 { 00:21:23.452 "name": "2ba89b60-1547-49bb-ba6e-71c49b473d47", 00:21:23.452 "aliases": [ 00:21:23.452 "lvs/nvme0n1p0" 00:21:23.452 ], 00:21:23.452 "product_name": "Logical Volume", 00:21:23.452 "block_size": 4096, 00:21:23.452 "num_blocks": 26476544, 00:21:23.452 "uuid": "2ba89b60-1547-49bb-ba6e-71c49b473d47", 00:21:23.452 "assigned_rate_limits": { 00:21:23.452 "rw_ios_per_sec": 0, 00:21:23.452 "rw_mbytes_per_sec": 0, 00:21:23.452 "r_mbytes_per_sec": 0, 00:21:23.452 "w_mbytes_per_sec": 0 00:21:23.452 }, 00:21:23.452 "claimed": false, 00:21:23.452 "zoned": false, 00:21:23.452 "supported_io_types": { 00:21:23.452 "read": true, 00:21:23.452 "write": true, 00:21:23.452 "unmap": true, 00:21:23.452 "flush": false, 00:21:23.452 "reset": true, 00:21:23.452 "nvme_admin": false, 00:21:23.452 "nvme_io": false, 00:21:23.452 "nvme_io_md": false, 00:21:23.452 "write_zeroes": true, 00:21:23.452 "zcopy": false, 00:21:23.452 "get_zone_info": false, 00:21:23.452 "zone_management": false, 00:21:23.452 "zone_append": false, 00:21:23.452 "compare": false, 00:21:23.452 "compare_and_write": false, 00:21:23.452 "abort": false, 00:21:23.452 "seek_hole": true, 00:21:23.452 "seek_data": true, 00:21:23.452 "copy": false, 00:21:23.452 "nvme_iov_md": false 00:21:23.452 }, 00:21:23.452 "driver_specific": { 00:21:23.452 "lvol": { 00:21:23.452 "lvol_store_uuid": "02e87b22-9595-46a2-8a33-751f51830a98", 00:21:23.452 "base_bdev": "nvme0n1", 00:21:23.452 "thin_provision": true, 00:21:23.452 "num_allocated_clusters": 0, 00:21:23.452 "snapshot": false, 00:21:23.452 "clone": false, 00:21:23.452 "esnap_clone": false 00:21:23.452 } 00:21:23.452 } 00:21:23.452 } 00:21:23.452 ]' 00:21:23.452 09:27:09 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:23.452 09:27:09 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:21:23.452 09:27:09 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:23.710 09:27:09 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:23.710 09:27:09 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:23.710 09:27:09 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:21:23.710 09:27:09 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:21:23.710 09:27:09 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2ba89b60-1547-49bb-ba6e-71c49b473d47 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:21:23.970 [2024-07-12 09:27:10.119944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.970 [2024-07-12 09:27:10.120011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:23.970 [2024-07-12 09:27:10.120033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:23.970 [2024-07-12 09:27:10.120049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.970 [2024-07-12 09:27:10.123383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.970 [2024-07-12 09:27:10.123450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:23.970 [2024-07-12 09:27:10.123469] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.298 ms 00:21:23.970 [2024-07-12 09:27:10.123483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.970 [2024-07-12 09:27:10.123692] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:23.970 [2024-07-12 09:27:10.124645] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:23.970 [2024-07-12 09:27:10.124686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.970 [2024-07-12 09:27:10.124707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:23.970 [2024-07-12 09:27:10.124720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.022 ms 00:21:23.970 [2024-07-12 09:27:10.124733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.970 [2024-07-12 09:27:10.124926] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a224f06c-e4f3-4bb1-bd64-4dc6315ffcd7 00:21:23.970 [2024-07-12 09:27:10.126005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.970 [2024-07-12 09:27:10.126047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:23.970 [2024-07-12 09:27:10.126067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:23.970 [2024-07-12 09:27:10.126079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.970 [2024-07-12 09:27:10.130860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.970 [2024-07-12 09:27:10.130915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:23.970 [2024-07-12 09:27:10.130935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.682 ms 00:21:23.970 [2024-07-12 09:27:10.130948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.970 [2024-07-12 09:27:10.131138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.970 [2024-07-12 09:27:10.131161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:23.970 [2024-07-12 09:27:10.131178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:21:23.970 [2024-07-12 09:27:10.131215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.970 [2024-07-12 09:27:10.131281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.970 [2024-07-12 09:27:10.131296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:23.970 [2024-07-12 09:27:10.131315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:23.970 [2024-07-12 09:27:10.131326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.970 [2024-07-12 09:27:10.131373] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:23.970 [2024-07-12 09:27:10.135940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.970 [2024-07-12 09:27:10.135987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:23.970 [2024-07-12 09:27:10.136004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.581 ms 00:21:23.970 [2024-07-12 09:27:10.136017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.970 [2024-07-12 
09:27:10.136131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.970 [2024-07-12 09:27:10.136155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:23.970 [2024-07-12 09:27:10.136168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:23.970 [2024-07-12 09:27:10.136202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.970 [2024-07-12 09:27:10.136248] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:23.970 [2024-07-12 09:27:10.136411] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:23.970 [2024-07-12 09:27:10.136430] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:23.970 [2024-07-12 09:27:10.136450] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:23.970 [2024-07-12 09:27:10.136465] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:23.970 [2024-07-12 09:27:10.136480] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:23.970 [2024-07-12 09:27:10.136493] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:23.970 [2024-07-12 09:27:10.136507] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:23.970 [2024-07-12 09:27:10.136522] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:23.970 [2024-07-12 09:27:10.136559] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:23.970 [2024-07-12 09:27:10.136572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.970 [2024-07-12 09:27:10.136586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:23.970 [2024-07-12 09:27:10.136599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:21:23.970 [2024-07-12 09:27:10.136612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.970 [2024-07-12 09:27:10.136718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.970 [2024-07-12 09:27:10.136737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:23.970 [2024-07-12 09:27:10.136749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:23.970 [2024-07-12 09:27:10.136763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.970 [2024-07-12 09:27:10.136902] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:23.971 [2024-07-12 09:27:10.136923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:23.971 [2024-07-12 09:27:10.136936] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:23.971 [2024-07-12 09:27:10.136950] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.971 [2024-07-12 09:27:10.136962] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:23.971 [2024-07-12 09:27:10.136974] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:23.971 [2024-07-12 09:27:10.136985] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:23.971 [2024-07-12 09:27:10.136998] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:21:23.971 [2024-07-12 09:27:10.137009] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:23.971 [2024-07-12 09:27:10.137021] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:23.971 [2024-07-12 09:27:10.137032] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:23.971 [2024-07-12 09:27:10.137044] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:23.971 [2024-07-12 09:27:10.137054] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:23.971 [2024-07-12 09:27:10.137069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:23.971 [2024-07-12 09:27:10.137081] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:23.971 [2024-07-12 09:27:10.137093] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.971 [2024-07-12 09:27:10.137104] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:23.971 [2024-07-12 09:27:10.137118] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:23.971 [2024-07-12 09:27:10.137129] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.971 [2024-07-12 09:27:10.137142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:23.971 [2024-07-12 09:27:10.137154] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:23.971 [2024-07-12 09:27:10.137167] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.971 [2024-07-12 09:27:10.137178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:23.971 [2024-07-12 09:27:10.137207] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:23.971 [2024-07-12 09:27:10.137219] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.971 [2024-07-12 09:27:10.137231] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:23.971 [2024-07-12 09:27:10.137242] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:23.971 [2024-07-12 09:27:10.137254] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.971 [2024-07-12 09:27:10.137265] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:23.971 [2024-07-12 09:27:10.137277] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:23.971 [2024-07-12 09:27:10.137287] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.971 [2024-07-12 09:27:10.137300] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:23.971 [2024-07-12 09:27:10.137310] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:23.971 [2024-07-12 09:27:10.137324] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:23.971 [2024-07-12 09:27:10.137335] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:23.971 [2024-07-12 09:27:10.137348] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:23.971 [2024-07-12 09:27:10.137358] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:23.971 [2024-07-12 09:27:10.137370] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:23.971 [2024-07-12 09:27:10.137381] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:23.971 [2024-07-12 09:27:10.137395] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.971 [2024-07-12 09:27:10.137405] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:23.971 [2024-07-12 09:27:10.137418] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:23.971 [2024-07-12 09:27:10.137429] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.971 [2024-07-12 09:27:10.137441] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:23.971 [2024-07-12 09:27:10.137452] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:23.971 [2024-07-12 09:27:10.137466] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:23.971 [2024-07-12 09:27:10.137477] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.971 [2024-07-12 09:27:10.137490] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:23.971 [2024-07-12 09:27:10.137501] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:23.971 [2024-07-12 09:27:10.137516] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:23.971 [2024-07-12 09:27:10.137527] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:23.971 [2024-07-12 09:27:10.137540] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:23.971 [2024-07-12 09:27:10.137551] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:23.971 [2024-07-12 09:27:10.137568] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:23.971 [2024-07-12 09:27:10.137595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:23.971 [2024-07-12 09:27:10.137611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:23.971 [2024-07-12 09:27:10.137623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:23.971 [2024-07-12 09:27:10.137638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:23.971 [2024-07-12 09:27:10.137650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:23.971 [2024-07-12 09:27:10.137664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:23.971 [2024-07-12 09:27:10.137675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:23.971 [2024-07-12 09:27:10.137689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:23.971 [2024-07-12 09:27:10.137701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:23.971 [2024-07-12 09:27:10.137715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:23.971 [2024-07-12 09:27:10.137727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:23.971 [2024-07-12 09:27:10.137742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:23.971 [2024-07-12 09:27:10.137753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:23.971 [2024-07-12 09:27:10.137767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:23.971 [2024-07-12 09:27:10.137779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:23.971 [2024-07-12 09:27:10.137792] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:23.971 [2024-07-12 09:27:10.137805] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:23.971 [2024-07-12 09:27:10.137820] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:23.971 [2024-07-12 09:27:10.137832] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:23.971 [2024-07-12 09:27:10.137845] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:23.971 [2024-07-12 09:27:10.137857] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:23.971 [2024-07-12 09:27:10.137871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.971 [2024-07-12 09:27:10.137884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:23.971 [2024-07-12 09:27:10.137897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.033 ms 00:21:23.971 [2024-07-12 09:27:10.137909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.971 [2024-07-12 09:27:10.138013] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
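[editor's note] By this point in the trace the whole provisioning chain has already run: attach the base and cache PCIe controllers, build an lvol store on nvme0n1, create a thin-provisioned 103424 MiB volume, split a 5171 MiB write-buffer partition off the cache device, and hand both to bdev_ftl_create, whose startup steps are being logged here. A condensed sketch of that RPC sequence, using only the commands and values that appear earlier in this log (the UUID captures mirror what common.sh does in this run; the sizes are this run's values, not general defaults):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as used throughout this log

    # Base namespace and NV cache device, matching the BDFs passed to trim.sh.
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    $RPC bdev_nvme_attach_controller -b nvc0  -t PCIe -a 0000:00:10.0

    # Lvol store plus thin-provisioned 103424 MiB base volume; rpc.py prints the
    # returned UUIDs, which the trace above captures the same way.
    lvs=$($RPC bdev_lvol_create_lvstore nvme0n1 lvs)
    lvol=$($RPC bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")

    # 5171 MiB split of the cache controller becomes the write-buffer bdev nvc0n1p0.
    $RPC bdev_split_create nvc0n1 -s 5171 1

    # FTL bdev on top of the pair; flags match trim.sh's values shown above.
    $RPC -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 \
         --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
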
00:21:23.971 [2024-07-12 09:27:10.138036] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:25.876 [2024-07-12 09:27:12.143105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.876 [2024-07-12 09:27:12.143182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:25.876 [2024-07-12 09:27:12.143224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2005.093 ms 00:21:25.876 [2024-07-12 09:27:12.143252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.876 [2024-07-12 09:27:12.176013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.876 [2024-07-12 09:27:12.176081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:25.876 [2024-07-12 09:27:12.176106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.339 ms 00:21:25.876 [2024-07-12 09:27:12.176119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.876 [2024-07-12 09:27:12.176360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.876 [2024-07-12 09:27:12.176383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:25.876 [2024-07-12 09:27:12.176402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:25.876 [2024-07-12 09:27:12.176414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.876 [2024-07-12 09:27:12.226548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.876 [2024-07-12 09:27:12.226623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:25.876 [2024-07-12 09:27:12.226648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.086 ms 00:21:25.876 [2024-07-12 09:27:12.226660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.876 [2024-07-12 09:27:12.226808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.876 [2024-07-12 09:27:12.226828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:25.876 [2024-07-12 09:27:12.226845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:25.876 [2024-07-12 09:27:12.226857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.876 [2024-07-12 09:27:12.227256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.876 [2024-07-12 09:27:12.227277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:25.876 [2024-07-12 09:27:12.227292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:21:25.876 [2024-07-12 09:27:12.227303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.135 [2024-07-12 09:27:12.227475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.135 [2024-07-12 09:27:12.227492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:26.135 [2024-07-12 09:27:12.227524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:21:26.135 [2024-07-12 09:27:12.227539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.135 [2024-07-12 09:27:12.248459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.135 [2024-07-12 09:27:12.248520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:26.135 [2024-07-12 
09:27:12.248543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.868 ms 00:21:26.135 [2024-07-12 09:27:12.248555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.135 [2024-07-12 09:27:12.262055] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:26.135 [2024-07-12 09:27:12.276174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.135 [2024-07-12 09:27:12.276261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:26.135 [2024-07-12 09:27:12.276283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.453 ms 00:21:26.135 [2024-07-12 09:27:12.276296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.135 [2024-07-12 09:27:12.341791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.135 [2024-07-12 09:27:12.341866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:26.135 [2024-07-12 09:27:12.341887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.358 ms 00:21:26.135 [2024-07-12 09:27:12.341901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.135 [2024-07-12 09:27:12.342218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.135 [2024-07-12 09:27:12.342247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:26.135 [2024-07-12 09:27:12.342262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:21:26.135 [2024-07-12 09:27:12.342278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.135 [2024-07-12 09:27:12.373868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.135 [2024-07-12 09:27:12.373921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:26.135 [2024-07-12 09:27:12.373941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.547 ms 00:21:26.135 [2024-07-12 09:27:12.373955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.135 [2024-07-12 09:27:12.405123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.135 [2024-07-12 09:27:12.405200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:26.135 [2024-07-12 09:27:12.405223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.067 ms 00:21:26.135 [2024-07-12 09:27:12.405237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.135 [2024-07-12 09:27:12.406047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.135 [2024-07-12 09:27:12.406088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:26.135 [2024-07-12 09:27:12.406105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:21:26.135 [2024-07-12 09:27:12.406119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.394 [2024-07-12 09:27:12.493782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.394 [2024-07-12 09:27:12.493856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:26.394 [2024-07-12 09:27:12.493879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.614 ms 00:21:26.394 [2024-07-12 09:27:12.493897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.394 [2024-07-12 
09:27:12.526760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.394 [2024-07-12 09:27:12.526832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:26.394 [2024-07-12 09:27:12.526855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.727 ms 00:21:26.394 [2024-07-12 09:27:12.526873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.394 [2024-07-12 09:27:12.559170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.394 [2024-07-12 09:27:12.559248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:26.394 [2024-07-12 09:27:12.559269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.160 ms 00:21:26.394 [2024-07-12 09:27:12.559283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.394 [2024-07-12 09:27:12.591206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.394 [2024-07-12 09:27:12.591267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:26.394 [2024-07-12 09:27:12.591288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.806 ms 00:21:26.394 [2024-07-12 09:27:12.591302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.394 [2024-07-12 09:27:12.591425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.394 [2024-07-12 09:27:12.591456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:26.394 [2024-07-12 09:27:12.591470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:26.394 [2024-07-12 09:27:12.591487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.394 [2024-07-12 09:27:12.591583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.394 [2024-07-12 09:27:12.591603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:26.394 [2024-07-12 09:27:12.591616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:26.394 [2024-07-12 09:27:12.591652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.394 [2024-07-12 09:27:12.592632] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:26.394 [2024-07-12 09:27:12.596808] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2472.337 ms, result 0 00:21:26.394 [2024-07-12 09:27:12.597672] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:26.394 { 00:21:26.394 "name": "ftl0", 00:21:26.394 "uuid": "a224f06c-e4f3-4bb1-bd64-4dc6315ffcd7" 00:21:26.394 } 00:21:26.394 09:27:12 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:21:26.394 09:27:12 ftl.ftl_trim -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:21:26.394 09:27:12 ftl.ftl_trim -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:26.394 09:27:12 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local i 00:21:26.394 09:27:12 ftl.ftl_trim -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:26.394 09:27:12 ftl.ftl_trim -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:26.394 09:27:12 ftl.ftl_trim -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:26.652 09:27:12 ftl.ftl_trim -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:21:26.910 [ 00:21:26.910 { 00:21:26.910 "name": "ftl0", 00:21:26.910 "aliases": [ 00:21:26.910 "a224f06c-e4f3-4bb1-bd64-4dc6315ffcd7" 00:21:26.910 ], 00:21:26.910 "product_name": "FTL disk", 00:21:26.910 "block_size": 4096, 00:21:26.910 "num_blocks": 23592960, 00:21:26.910 "uuid": "a224f06c-e4f3-4bb1-bd64-4dc6315ffcd7", 00:21:26.910 "assigned_rate_limits": { 00:21:26.910 "rw_ios_per_sec": 0, 00:21:26.910 "rw_mbytes_per_sec": 0, 00:21:26.910 "r_mbytes_per_sec": 0, 00:21:26.910 "w_mbytes_per_sec": 0 00:21:26.910 }, 00:21:26.910 "claimed": false, 00:21:26.910 "zoned": false, 00:21:26.910 "supported_io_types": { 00:21:26.910 "read": true, 00:21:26.910 "write": true, 00:21:26.910 "unmap": true, 00:21:26.910 "flush": true, 00:21:26.910 "reset": false, 00:21:26.910 "nvme_admin": false, 00:21:26.910 "nvme_io": false, 00:21:26.910 "nvme_io_md": false, 00:21:26.910 "write_zeroes": true, 00:21:26.910 "zcopy": false, 00:21:26.910 "get_zone_info": false, 00:21:26.910 "zone_management": false, 00:21:26.910 "zone_append": false, 00:21:26.910 "compare": false, 00:21:26.910 "compare_and_write": false, 00:21:26.910 "abort": false, 00:21:26.910 "seek_hole": false, 00:21:26.910 "seek_data": false, 00:21:26.910 "copy": false, 00:21:26.910 "nvme_iov_md": false 00:21:26.910 }, 00:21:26.910 "driver_specific": { 00:21:26.910 "ftl": { 00:21:26.910 "base_bdev": "2ba89b60-1547-49bb-ba6e-71c49b473d47", 00:21:26.911 "cache": "nvc0n1p0" 00:21:26.911 } 00:21:26.911 } 00:21:26.911 } 00:21:26.911 ] 00:21:26.911 09:27:13 ftl.ftl_trim -- common/autotest_common.sh@905 -- # return 0 00:21:26.911 09:27:13 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:21:26.911 09:27:13 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:27.168 09:27:13 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:21:27.168 09:27:13 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:21:27.426 09:27:13 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:21:27.426 { 00:21:27.426 "name": "ftl0", 00:21:27.426 "aliases": [ 00:21:27.426 "a224f06c-e4f3-4bb1-bd64-4dc6315ffcd7" 00:21:27.426 ], 00:21:27.426 "product_name": "FTL disk", 00:21:27.426 "block_size": 4096, 00:21:27.426 "num_blocks": 23592960, 00:21:27.426 "uuid": "a224f06c-e4f3-4bb1-bd64-4dc6315ffcd7", 00:21:27.426 "assigned_rate_limits": { 00:21:27.426 "rw_ios_per_sec": 0, 00:21:27.426 "rw_mbytes_per_sec": 0, 00:21:27.426 "r_mbytes_per_sec": 0, 00:21:27.426 "w_mbytes_per_sec": 0 00:21:27.426 }, 00:21:27.426 "claimed": false, 00:21:27.426 "zoned": false, 00:21:27.426 "supported_io_types": { 00:21:27.426 "read": true, 00:21:27.426 "write": true, 00:21:27.426 "unmap": true, 00:21:27.426 "flush": true, 00:21:27.426 "reset": false, 00:21:27.426 "nvme_admin": false, 00:21:27.426 "nvme_io": false, 00:21:27.426 "nvme_io_md": false, 00:21:27.426 "write_zeroes": true, 00:21:27.426 "zcopy": false, 00:21:27.426 "get_zone_info": false, 00:21:27.426 "zone_management": false, 00:21:27.426 "zone_append": false, 00:21:27.426 "compare": false, 00:21:27.426 "compare_and_write": false, 00:21:27.426 "abort": false, 00:21:27.426 "seek_hole": false, 00:21:27.426 "seek_data": false, 00:21:27.426 "copy": false, 00:21:27.426 "nvme_iov_md": false 00:21:27.426 }, 00:21:27.426 "driver_specific": { 00:21:27.426 "ftl": { 00:21:27.426 "base_bdev": "2ba89b60-1547-49bb-ba6e-71c49b473d47", 00:21:27.426 "cache": "nvc0n1p0" 
00:21:27.426 } 00:21:27.426 } 00:21:27.426 } 00:21:27.426 ]' 00:21:27.426 09:27:13 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:21:27.426 09:27:13 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:21:27.426 09:27:13 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:27.683 [2024-07-12 09:27:13.997922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.683 [2024-07-12 09:27:13.997988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:27.683 [2024-07-12 09:27:13.998012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:27.683 [2024-07-12 09:27:13.998025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.683 [2024-07-12 09:27:13.998079] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:27.683 [2024-07-12 09:27:14.001416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.683 [2024-07-12 09:27:14.001457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:27.683 [2024-07-12 09:27:14.001474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.313 ms 00:21:27.683 [2024-07-12 09:27:14.001493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.683 [2024-07-12 09:27:14.002119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.683 [2024-07-12 09:27:14.002158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:27.683 [2024-07-12 09:27:14.002174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:21:27.683 [2024-07-12 09:27:14.002206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.683 [2024-07-12 09:27:14.006046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.684 [2024-07-12 09:27:14.006093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:27.684 [2024-07-12 09:27:14.006110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.799 ms 00:21:27.684 [2024-07-12 09:27:14.006125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.684 [2024-07-12 09:27:14.013890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.684 [2024-07-12 09:27:14.013956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:27.684 [2024-07-12 09:27:14.013975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.687 ms 00:21:27.684 [2024-07-12 09:27:14.013988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.943 [2024-07-12 09:27:14.045926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.943 [2024-07-12 09:27:14.045991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:27.943 [2024-07-12 09:27:14.046012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.813 ms 00:21:27.943 [2024-07-12 09:27:14.046029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.943 [2024-07-12 09:27:14.065468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.943 [2024-07-12 09:27:14.065536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:27.944 [2024-07-12 09:27:14.065556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.322 ms 00:21:27.944 
[2024-07-12 09:27:14.065575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.944 [2024-07-12 09:27:14.065835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.944 [2024-07-12 09:27:14.065861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:27.944 [2024-07-12 09:27:14.065875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:21:27.944 [2024-07-12 09:27:14.065889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.944 [2024-07-12 09:27:14.097007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.944 [2024-07-12 09:27:14.097062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:27.944 [2024-07-12 09:27:14.097081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.074 ms 00:21:27.944 [2024-07-12 09:27:14.097095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.944 [2024-07-12 09:27:14.127936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.944 [2024-07-12 09:27:14.127990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:27.944 [2024-07-12 09:27:14.128009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.723 ms 00:21:27.944 [2024-07-12 09:27:14.128025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.944 [2024-07-12 09:27:14.158792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.944 [2024-07-12 09:27:14.158841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:27.944 [2024-07-12 09:27:14.158860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.667 ms 00:21:27.944 [2024-07-12 09:27:14.158873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.944 [2024-07-12 09:27:14.189578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.944 [2024-07-12 09:27:14.189626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:27.944 [2024-07-12 09:27:14.189645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.551 ms 00:21:27.944 [2024-07-12 09:27:14.189658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.944 [2024-07-12 09:27:14.189754] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:27.944 [2024-07-12 09:27:14.189784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.189800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.189815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.189827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.189841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.189853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.189870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.189882] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.189896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.189908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.189922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.189935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.189950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.189962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.189976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.189988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190273] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:27.944 [2024-07-12 09:27:14.190550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.190565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.190577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.190591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 
09:27:14.190602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.190616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.190628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.190641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.190653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.190701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.190726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.190753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.190775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.190793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.190807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.190866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.190903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.190934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.190950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.190966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.190979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.190992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.191004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.191018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.191030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.191044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.191056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.191069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.191081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:21:27.945 [2024-07-12 09:27:14.191095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.191107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.191120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.191132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.191147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.191160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.191174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.191200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.191217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.191229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.191243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.191255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.191268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.191280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.191294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.191307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.191320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.191332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:27.945 [2024-07-12 09:27:14.191358] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:27.945 [2024-07-12 09:27:14.191370] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a224f06c-e4f3-4bb1-bd64-4dc6315ffcd7 00:21:27.945 [2024-07-12 09:27:14.191386] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:27.945 [2024-07-12 09:27:14.191400] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:27.945 [2024-07-12 09:27:14.191413] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:27.945 [2024-07-12 09:27:14.191435] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:27.945 [2024-07-12 09:27:14.191448] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:27.945 [2024-07-12 09:27:14.191460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:27.945 [2024-07-12 09:27:14.191477] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:27.945 [2024-07-12 09:27:14.191488] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:27.945 [2024-07-12 09:27:14.191499] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:27.945 [2024-07-12 09:27:14.191512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.945 [2024-07-12 09:27:14.191525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:27.945 [2024-07-12 09:27:14.191538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.760 ms 00:21:27.945 [2024-07-12 09:27:14.191551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.945 [2024-07-12 09:27:14.208663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.945 [2024-07-12 09:27:14.208711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:27.945 [2024-07-12 09:27:14.208730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.066 ms 00:21:27.945 [2024-07-12 09:27:14.208746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.945 [2024-07-12 09:27:14.209253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.945 [2024-07-12 09:27:14.209282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:27.945 [2024-07-12 09:27:14.209298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:21:27.945 [2024-07-12 09:27:14.209311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.945 [2024-07-12 09:27:14.267860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.945 [2024-07-12 09:27:14.267939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:27.945 [2024-07-12 09:27:14.267959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.945 [2024-07-12 09:27:14.267973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.945 [2024-07-12 09:27:14.268131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.945 [2024-07-12 09:27:14.268155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:27.945 [2024-07-12 09:27:14.268168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.945 [2024-07-12 09:27:14.268181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.945 [2024-07-12 09:27:14.268304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.945 [2024-07-12 09:27:14.268329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:27.945 [2024-07-12 09:27:14.268342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.945 [2024-07-12 09:27:14.268358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.945 [2024-07-12 09:27:14.268398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.945 [2024-07-12 09:27:14.268415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:27.945 [2024-07-12 09:27:14.268428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.946 [2024-07-12 09:27:14.268440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.205 [2024-07-12 09:27:14.373020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:21:28.205 [2024-07-12 09:27:14.373092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:28.205 [2024-07-12 09:27:14.373112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.205 [2024-07-12 09:27:14.373126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.205 [2024-07-12 09:27:14.457116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.205 [2024-07-12 09:27:14.457213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:28.205 [2024-07-12 09:27:14.457235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.205 [2024-07-12 09:27:14.457250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.205 [2024-07-12 09:27:14.457375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.205 [2024-07-12 09:27:14.457402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:28.205 [2024-07-12 09:27:14.457414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.205 [2024-07-12 09:27:14.457430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.205 [2024-07-12 09:27:14.457492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.205 [2024-07-12 09:27:14.457509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:28.205 [2024-07-12 09:27:14.457521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.205 [2024-07-12 09:27:14.457534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.205 [2024-07-12 09:27:14.457686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.205 [2024-07-12 09:27:14.457711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:28.205 [2024-07-12 09:27:14.457744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.205 [2024-07-12 09:27:14.457759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.205 [2024-07-12 09:27:14.457836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.205 [2024-07-12 09:27:14.457859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:28.205 [2024-07-12 09:27:14.457872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.205 [2024-07-12 09:27:14.457885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.205 [2024-07-12 09:27:14.457945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.205 [2024-07-12 09:27:14.457964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:28.205 [2024-07-12 09:27:14.457979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.205 [2024-07-12 09:27:14.457994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.205 [2024-07-12 09:27:14.458060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.205 [2024-07-12 09:27:14.458081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:28.205 [2024-07-12 09:27:14.458094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.205 [2024-07-12 09:27:14.458108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.205 [2024-07-12 
09:27:14.458346] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 460.418 ms, result 0 00:21:28.205 true 00:21:28.205 09:27:14 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 80781 00:21:28.205 09:27:14 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 80781 ']' 00:21:28.205 09:27:14 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 80781 00:21:28.205 09:27:14 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:21:28.205 09:27:14 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:28.205 09:27:14 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80781 00:21:28.205 killing process with pid 80781 00:21:28.205 09:27:14 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:28.205 09:27:14 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:28.205 09:27:14 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80781' 00:21:28.205 09:27:14 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 80781 00:21:28.205 09:27:14 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 80781 00:21:33.471 09:27:19 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:21:34.038 65536+0 records in 00:21:34.038 65536+0 records out 00:21:34.038 268435456 bytes (268 MB, 256 MiB) copied, 1.17173 s, 229 MB/s 00:21:34.038 09:27:20 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:34.038 [2024-07-12 09:27:20.304350] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:21:34.038 [2024-07-12 09:27:20.304519] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80976 ] 00:21:34.296 [2024-07-12 09:27:20.483240] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.555 [2024-07-12 09:27:20.709358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.813 [2024-07-12 09:27:21.024625] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:34.813 [2024-07-12 09:27:21.024709] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:35.072 [2024-07-12 09:27:21.187265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.072 [2024-07-12 09:27:21.187338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:35.072 [2024-07-12 09:27:21.187360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:35.072 [2024-07-12 09:27:21.187372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.072 [2024-07-12 09:27:21.190579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.072 [2024-07-12 09:27:21.190625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:35.072 [2024-07-12 09:27:21.190643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.176 ms 00:21:35.072 [2024-07-12 09:27:21.190655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.072 [2024-07-12 09:27:21.190812] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:35.072 [2024-07-12 09:27:21.191796] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:35.072 [2024-07-12 09:27:21.191840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.072 [2024-07-12 09:27:21.191855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:35.072 [2024-07-12 09:27:21.191868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.039 ms 00:21:35.072 [2024-07-12 09:27:21.191879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.072 [2024-07-12 09:27:21.193123] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:35.072 [2024-07-12 09:27:21.209472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.072 [2024-07-12 09:27:21.209525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:35.072 [2024-07-12 09:27:21.209549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.350 ms 00:21:35.072 [2024-07-12 09:27:21.209561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.072 [2024-07-12 09:27:21.209686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.072 [2024-07-12 09:27:21.209709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:35.072 [2024-07-12 09:27:21.209722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:35.072 [2024-07-12 09:27:21.209733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.072 [2024-07-12 09:27:21.214309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:35.072 [2024-07-12 09:27:21.214368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:35.072 [2024-07-12 09:27:21.214385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.515 ms 00:21:35.072 [2024-07-12 09:27:21.214397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.072 [2024-07-12 09:27:21.214535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.072 [2024-07-12 09:27:21.214556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:35.072 [2024-07-12 09:27:21.214569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:35.072 [2024-07-12 09:27:21.214580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.072 [2024-07-12 09:27:21.214624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.072 [2024-07-12 09:27:21.214640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:35.072 [2024-07-12 09:27:21.214655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:35.072 [2024-07-12 09:27:21.214666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.073 [2024-07-12 09:27:21.214699] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:35.073 [2024-07-12 09:27:21.218993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.073 [2024-07-12 09:27:21.219031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:35.073 [2024-07-12 09:27:21.219047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.304 ms 00:21:35.073 [2024-07-12 09:27:21.219058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.073 [2024-07-12 09:27:21.219128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.073 [2024-07-12 09:27:21.219147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:35.073 [2024-07-12 09:27:21.219159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:35.073 [2024-07-12 09:27:21.219169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.073 [2024-07-12 09:27:21.219216] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:35.073 [2024-07-12 09:27:21.219248] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:35.073 [2024-07-12 09:27:21.219294] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:35.073 [2024-07-12 09:27:21.219316] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:35.073 [2024-07-12 09:27:21.219431] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:35.073 [2024-07-12 09:27:21.219448] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:35.073 [2024-07-12 09:27:21.219462] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:35.073 [2024-07-12 09:27:21.219477] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:35.073 [2024-07-12 09:27:21.219491] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:35.073 [2024-07-12 09:27:21.219518] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:35.073 [2024-07-12 09:27:21.219529] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:35.073 [2024-07-12 09:27:21.219540] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:35.073 [2024-07-12 09:27:21.219551] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:35.073 [2024-07-12 09:27:21.219563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.073 [2024-07-12 09:27:21.219574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:35.073 [2024-07-12 09:27:21.219586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:21:35.073 [2024-07-12 09:27:21.219597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.073 [2024-07-12 09:27:21.219720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.073 [2024-07-12 09:27:21.219737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:35.073 [2024-07-12 09:27:21.219754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:35.073 [2024-07-12 09:27:21.219765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.073 [2024-07-12 09:27:21.219876] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:35.073 [2024-07-12 09:27:21.219894] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:35.073 [2024-07-12 09:27:21.219906] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:35.073 [2024-07-12 09:27:21.219918] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.073 [2024-07-12 09:27:21.219930] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:35.073 [2024-07-12 09:27:21.219940] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:35.073 [2024-07-12 09:27:21.219950] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:35.073 [2024-07-12 09:27:21.219962] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:35.073 [2024-07-12 09:27:21.219973] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:35.073 [2024-07-12 09:27:21.219983] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:35.073 [2024-07-12 09:27:21.219993] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:35.073 [2024-07-12 09:27:21.220003] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:35.073 [2024-07-12 09:27:21.220012] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:35.073 [2024-07-12 09:27:21.220022] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:35.073 [2024-07-12 09:27:21.220033] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:35.073 [2024-07-12 09:27:21.220043] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.073 [2024-07-12 09:27:21.220054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:35.073 [2024-07-12 09:27:21.220064] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:35.073 [2024-07-12 09:27:21.220087] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.073 [2024-07-12 09:27:21.220098] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:35.073 [2024-07-12 09:27:21.220109] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:35.073 [2024-07-12 09:27:21.220119] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.073 [2024-07-12 09:27:21.220129] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:35.073 [2024-07-12 09:27:21.220139] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:35.073 [2024-07-12 09:27:21.220149] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.073 [2024-07-12 09:27:21.220159] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:35.073 [2024-07-12 09:27:21.220169] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:35.073 [2024-07-12 09:27:21.220179] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.073 [2024-07-12 09:27:21.220213] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:35.073 [2024-07-12 09:27:21.220226] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:35.073 [2024-07-12 09:27:21.220236] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.073 [2024-07-12 09:27:21.220246] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:35.073 [2024-07-12 09:27:21.220256] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:35.073 [2024-07-12 09:27:21.220265] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:35.073 [2024-07-12 09:27:21.220276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:35.073 [2024-07-12 09:27:21.220286] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:35.073 [2024-07-12 09:27:21.220296] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:35.073 [2024-07-12 09:27:21.220306] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:35.073 [2024-07-12 09:27:21.220316] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:35.073 [2024-07-12 09:27:21.220326] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.073 [2024-07-12 09:27:21.220335] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:35.073 [2024-07-12 09:27:21.220345] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:35.073 [2024-07-12 09:27:21.220355] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.073 [2024-07-12 09:27:21.220365] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:35.073 [2024-07-12 09:27:21.220375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:35.073 [2024-07-12 09:27:21.220386] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:35.073 [2024-07-12 09:27:21.220396] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.073 [2024-07-12 09:27:21.220412] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:35.073 [2024-07-12 09:27:21.220424] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:35.073 [2024-07-12 09:27:21.220434] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:35.073 
[2024-07-12 09:27:21.220445] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:35.073 [2024-07-12 09:27:21.220455] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:35.073 [2024-07-12 09:27:21.220465] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:35.073 [2024-07-12 09:27:21.220476] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:35.073 [2024-07-12 09:27:21.220490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:35.073 [2024-07-12 09:27:21.220503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:35.073 [2024-07-12 09:27:21.220514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:35.073 [2024-07-12 09:27:21.220525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:35.073 [2024-07-12 09:27:21.220535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:35.073 [2024-07-12 09:27:21.220546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:35.073 [2024-07-12 09:27:21.220557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:35.073 [2024-07-12 09:27:21.220568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:35.073 [2024-07-12 09:27:21.220579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:35.073 [2024-07-12 09:27:21.220590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:35.073 [2024-07-12 09:27:21.220601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:35.073 [2024-07-12 09:27:21.220612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:35.073 [2024-07-12 09:27:21.220623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:35.073 [2024-07-12 09:27:21.220634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:35.073 [2024-07-12 09:27:21.220645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:35.073 [2024-07-12 09:27:21.220656] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:35.074 [2024-07-12 09:27:21.220668] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:35.074 [2024-07-12 09:27:21.220681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:35.074 [2024-07-12 09:27:21.220692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:35.074 [2024-07-12 09:27:21.220704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:35.074 [2024-07-12 09:27:21.220715] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:35.074 [2024-07-12 09:27:21.220727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.074 [2024-07-12 09:27:21.220738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:35.074 [2024-07-12 09:27:21.220750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.918 ms 00:21:35.074 [2024-07-12 09:27:21.220761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.074 [2024-07-12 09:27:21.260148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.074 [2024-07-12 09:27:21.260229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:35.074 [2024-07-12 09:27:21.260260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.313 ms 00:21:35.074 [2024-07-12 09:27:21.260278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.074 [2024-07-12 09:27:21.260480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.074 [2024-07-12 09:27:21.260501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:35.074 [2024-07-12 09:27:21.260521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:35.074 [2024-07-12 09:27:21.260532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.074 [2024-07-12 09:27:21.298967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.074 [2024-07-12 09:27:21.299029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:35.074 [2024-07-12 09:27:21.299054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.400 ms 00:21:35.074 [2024-07-12 09:27:21.299066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.074 [2024-07-12 09:27:21.299232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.074 [2024-07-12 09:27:21.299255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:35.074 [2024-07-12 09:27:21.299269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:35.074 [2024-07-12 09:27:21.299280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.074 [2024-07-12 09:27:21.299648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.074 [2024-07-12 09:27:21.299675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:35.074 [2024-07-12 09:27:21.299689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:21:35.074 [2024-07-12 09:27:21.299700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.074 [2024-07-12 09:27:21.299861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.074 [2024-07-12 09:27:21.299880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:35.074 [2024-07-12 09:27:21.299892] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:21:35.074 [2024-07-12 09:27:21.299903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.074 [2024-07-12 09:27:21.316352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.074 [2024-07-12 09:27:21.316401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:35.074 [2024-07-12 09:27:21.316420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.419 ms 00:21:35.074 [2024-07-12 09:27:21.316431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.074 [2024-07-12 09:27:21.332812] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:35.074 [2024-07-12 09:27:21.332861] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:35.074 [2024-07-12 09:27:21.332881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.074 [2024-07-12 09:27:21.332894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:35.074 [2024-07-12 09:27:21.332907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.284 ms 00:21:35.074 [2024-07-12 09:27:21.332918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.074 [2024-07-12 09:27:21.363023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.074 [2024-07-12 09:27:21.363076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:35.074 [2024-07-12 09:27:21.363094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.999 ms 00:21:35.074 [2024-07-12 09:27:21.363105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.074 [2024-07-12 09:27:21.379019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.074 [2024-07-12 09:27:21.379063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:35.074 [2024-07-12 09:27:21.379081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.778 ms 00:21:35.074 [2024-07-12 09:27:21.379092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.074 [2024-07-12 09:27:21.394684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.074 [2024-07-12 09:27:21.394726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:35.074 [2024-07-12 09:27:21.394743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.474 ms 00:21:35.074 [2024-07-12 09:27:21.394754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.074 [2024-07-12 09:27:21.395597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.074 [2024-07-12 09:27:21.395628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:35.074 [2024-07-12 09:27:21.395648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.711 ms 00:21:35.074 [2024-07-12 09:27:21.395669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.333 [2024-07-12 09:27:21.469239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.333 [2024-07-12 09:27:21.469319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:35.333 [2024-07-12 09:27:21.469341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 73.534 ms 00:21:35.333 [2024-07-12 09:27:21.469352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.333 [2024-07-12 09:27:21.482223] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:35.333 [2024-07-12 09:27:21.496341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.333 [2024-07-12 09:27:21.496412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:35.333 [2024-07-12 09:27:21.496432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.831 ms 00:21:35.333 [2024-07-12 09:27:21.496444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.333 [2024-07-12 09:27:21.496586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.333 [2024-07-12 09:27:21.496607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:35.333 [2024-07-12 09:27:21.496624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:35.333 [2024-07-12 09:27:21.496635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.333 [2024-07-12 09:27:21.496705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.333 [2024-07-12 09:27:21.496723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:35.333 [2024-07-12 09:27:21.496735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:35.333 [2024-07-12 09:27:21.496746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.333 [2024-07-12 09:27:21.496779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.333 [2024-07-12 09:27:21.496794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:35.333 [2024-07-12 09:27:21.496806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:35.333 [2024-07-12 09:27:21.496822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.333 [2024-07-12 09:27:21.496858] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:35.333 [2024-07-12 09:27:21.496874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.333 [2024-07-12 09:27:21.496885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:35.333 [2024-07-12 09:27:21.496897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:35.333 [2024-07-12 09:27:21.496908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.333 [2024-07-12 09:27:21.528219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.333 [2024-07-12 09:27:21.528277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:35.333 [2024-07-12 09:27:21.528305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.279 ms 00:21:35.333 [2024-07-12 09:27:21.528317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.333 [2024-07-12 09:27:21.528460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.333 [2024-07-12 09:27:21.528482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:35.333 [2024-07-12 09:27:21.528495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:21:35.333 [2024-07-12 09:27:21.528506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:35.333 [2024-07-12 09:27:21.529414] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:35.333 [2024-07-12 09:27:21.533558] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 341.809 ms, result 0 00:21:35.333 [2024-07-12 09:27:21.534483] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:35.333 [2024-07-12 09:27:21.550843] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:45.179  Copying: 24/256 [MB] (24 MBps) Copying: 50/256 [MB] (26 MBps) Copying: 78/256 [MB] (27 MBps) Copying: 104/256 [MB] (26 MBps) Copying: 131/256 [MB] (26 MBps) Copying: 158/256 [MB] (26 MBps) Copying: 183/256 [MB] (25 MBps) Copying: 208/256 [MB] (25 MBps) Copying: 233/256 [MB] (24 MBps) Copying: 256/256 [MB] (average 25 MBps)[2024-07-12 09:27:31.457572] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:45.179 [2024-07-12 09:27:31.469505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.179 [2024-07-12 09:27:31.469587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:45.179 [2024-07-12 09:27:31.469623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:45.179 [2024-07-12 09:27:31.469634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.179 [2024-07-12 09:27:31.469667] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:45.179 [2024-07-12 09:27:31.473031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.179 [2024-07-12 09:27:31.473063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:45.179 [2024-07-12 09:27:31.473119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.341 ms 00:21:45.179 [2024-07-12 09:27:31.473129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.179 [2024-07-12 09:27:31.474872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.179 [2024-07-12 09:27:31.474912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:45.179 [2024-07-12 09:27:31.474944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.714 ms 00:21:45.179 [2024-07-12 09:27:31.474954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.180 [2024-07-12 09:27:31.481845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.180 [2024-07-12 09:27:31.481882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:45.180 [2024-07-12 09:27:31.481913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.867 ms 00:21:45.180 [2024-07-12 09:27:31.481931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.180 [2024-07-12 09:27:31.488995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.180 [2024-07-12 09:27:31.489029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:45.180 [2024-07-12 09:27:31.489060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.997 ms 00:21:45.180 [2024-07-12 09:27:31.489070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.180 [2024-07-12 09:27:31.520539] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.180 [2024-07-12 09:27:31.520772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:45.180 [2024-07-12 09:27:31.520802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.401 ms 00:21:45.180 [2024-07-12 09:27:31.520815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.439 [2024-07-12 09:27:31.538841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.439 [2024-07-12 09:27:31.538881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:45.439 [2024-07-12 09:27:31.538913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.954 ms 00:21:45.439 [2024-07-12 09:27:31.538924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.439 [2024-07-12 09:27:31.539088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.439 [2024-07-12 09:27:31.539108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:45.439 [2024-07-12 09:27:31.539120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:21:45.439 [2024-07-12 09:27:31.539130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.439 [2024-07-12 09:27:31.568483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.439 [2024-07-12 09:27:31.568543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:45.439 [2024-07-12 09:27:31.568577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.330 ms 00:21:45.439 [2024-07-12 09:27:31.568587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.439 [2024-07-12 09:27:31.597902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.439 [2024-07-12 09:27:31.597968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:45.439 [2024-07-12 09:27:31.598002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.251 ms 00:21:45.439 [2024-07-12 09:27:31.598013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.439 [2024-07-12 09:27:31.628243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.439 [2024-07-12 09:27:31.628325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:45.439 [2024-07-12 09:27:31.628359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.160 ms 00:21:45.439 [2024-07-12 09:27:31.628370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.439 [2024-07-12 09:27:31.660009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.439 [2024-07-12 09:27:31.660068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:45.439 [2024-07-12 09:27:31.660104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.522 ms 00:21:45.439 [2024-07-12 09:27:31.660114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.439 [2024-07-12 09:27:31.660181] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:45.439 [2024-07-12 09:27:31.660241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 
09:27:31.660280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:21:45.439 [2024-07-12 09:27:31.660594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:45.439 [2024-07-12 09:27:31.660803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.660814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.660826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.660837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.660848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.660859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.660871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.660883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.660894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.660905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.660917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.660928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.660939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.660951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.660962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.660974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.660985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.660997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:45.440 [2024-07-12 09:27:31.661458] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:45.440 [2024-07-12 09:27:31.661470] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
a224f06c-e4f3-4bb1-bd64-4dc6315ffcd7 00:21:45.440 [2024-07-12 09:27:31.661482] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:45.440 [2024-07-12 09:27:31.661492] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:45.440 [2024-07-12 09:27:31.661503] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:45.440 [2024-07-12 09:27:31.661530] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:45.440 [2024-07-12 09:27:31.661541] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:45.440 [2024-07-12 09:27:31.661552] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:45.440 [2024-07-12 09:27:31.661562] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:45.440 [2024-07-12 09:27:31.661572] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:45.440 [2024-07-12 09:27:31.661582] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:45.440 [2024-07-12 09:27:31.661593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.440 [2024-07-12 09:27:31.661604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:45.440 [2024-07-12 09:27:31.661616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.415 ms 00:21:45.440 [2024-07-12 09:27:31.661631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.440 [2024-07-12 09:27:31.678362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.440 [2024-07-12 09:27:31.678430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:45.440 [2024-07-12 09:27:31.678450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.700 ms 00:21:45.440 [2024-07-12 09:27:31.678461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.440 [2024-07-12 09:27:31.678947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.440 [2024-07-12 09:27:31.678965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:45.440 [2024-07-12 09:27:31.678986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:21:45.440 [2024-07-12 09:27:31.678998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.440 [2024-07-12 09:27:31.719599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.440 [2024-07-12 09:27:31.719660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:45.440 [2024-07-12 09:27:31.719679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.440 [2024-07-12 09:27:31.719690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.440 [2024-07-12 09:27:31.719803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.440 [2024-07-12 09:27:31.719820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:45.440 [2024-07-12 09:27:31.719839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.440 [2024-07-12 09:27:31.719850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.440 [2024-07-12 09:27:31.719917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.440 [2024-07-12 09:27:31.719936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:45.440 
[2024-07-12 09:27:31.719948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.440 [2024-07-12 09:27:31.719959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.440 [2024-07-12 09:27:31.719985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.440 [2024-07-12 09:27:31.719999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:45.440 [2024-07-12 09:27:31.720011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.440 [2024-07-12 09:27:31.720028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.717 [2024-07-12 09:27:31.819183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.717 [2024-07-12 09:27:31.819257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:45.717 [2024-07-12 09:27:31.819276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.717 [2024-07-12 09:27:31.819298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.717 [2024-07-12 09:27:31.904718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.717 [2024-07-12 09:27:31.904785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:45.717 [2024-07-12 09:27:31.904820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.717 [2024-07-12 09:27:31.904841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.717 [2024-07-12 09:27:31.904919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.717 [2024-07-12 09:27:31.904937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:45.717 [2024-07-12 09:27:31.904948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.717 [2024-07-12 09:27:31.904959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.717 [2024-07-12 09:27:31.904993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.717 [2024-07-12 09:27:31.905006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:45.717 [2024-07-12 09:27:31.905017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.717 [2024-07-12 09:27:31.905027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.717 [2024-07-12 09:27:31.905157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.717 [2024-07-12 09:27:31.905178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:45.717 [2024-07-12 09:27:31.905191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.717 [2024-07-12 09:27:31.905245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.717 [2024-07-12 09:27:31.905305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.717 [2024-07-12 09:27:31.905324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:45.717 [2024-07-12 09:27:31.905336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.717 [2024-07-12 09:27:31.905347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.717 [2024-07-12 09:27:31.905401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.717 [2024-07-12 09:27:31.905417] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:45.717 [2024-07-12 09:27:31.905430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.717 [2024-07-12 09:27:31.905441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.717 [2024-07-12 09:27:31.905495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.717 [2024-07-12 09:27:31.905511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:45.717 [2024-07-12 09:27:31.905524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.717 [2024-07-12 09:27:31.905534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.717 [2024-07-12 09:27:31.905701] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 436.207 ms, result 0 00:21:47.093 00:21:47.093 00:21:47.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.093 09:27:33 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=81105 00:21:47.093 09:27:33 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 81105 00:21:47.093 09:27:33 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 81105 ']' 00:21:47.093 09:27:33 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.093 09:27:33 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:21:47.093 09:27:33 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:47.093 09:27:33 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.093 09:27:33 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:47.093 09:27:33 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:47.093 [2024-07-12 09:27:33.230607] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
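The xtrace above shows trim.sh launching spdk_tgt with FTL init logging (-L ftl_init) and then calling waitforlisten to block until the target's RPC server is accepting connections on /var/tmp/spdk.sock. Below is a minimal sketch of what that wait amounts to; it is not the actual autotest_common.sh implementation, and the probe RPC (rpc_get_methods) and poll interval are illustrative assumptions:

# Hedged sketch of the waitforlisten step: poll the RPC socket until the
# target answers, bailing out if the process dies or retries run out.
rpc_addr=/var/tmp/spdk.sock      # socket named in the trace above
svcpid=81105                     # spdk_tgt pid from the trace above
max_retries=100
i=0
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; do
    kill -0 "$svcpid" 2> /dev/null || { echo "spdk_tgt exited before listening"; exit 1; }
    (( ++i > max_retries )) && { echo "timed out waiting on $rpc_addr"; exit 1; }
    sleep 0.5
done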
00:21:47.093 [2024-07-12 09:27:33.230791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81105 ] 00:21:47.093 [2024-07-12 09:27:33.402599] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.351 [2024-07-12 09:27:33.589703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.284 09:27:34 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:48.284 09:27:34 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:21:48.284 09:27:34 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:21:48.284 [2024-07-12 09:27:34.567645] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:48.284 [2024-07-12 09:27:34.567728] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:48.545 [2024-07-12 09:27:34.725553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.545 [2024-07-12 09:27:34.725620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:48.545 [2024-07-12 09:27:34.725641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:48.545 [2024-07-12 09:27:34.725656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.545 [2024-07-12 09:27:34.728829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.545 [2024-07-12 09:27:34.728880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:48.545 [2024-07-12 09:27:34.728899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.146 ms 00:21:48.545 [2024-07-12 09:27:34.728912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.545 [2024-07-12 09:27:34.729044] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:48.545 [2024-07-12 09:27:34.730013] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:48.545 [2024-07-12 09:27:34.730056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.545 [2024-07-12 09:27:34.730073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:48.545 [2024-07-12 09:27:34.730087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.022 ms 00:21:48.545 [2024-07-12 09:27:34.730100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.545 [2024-07-12 09:27:34.731343] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:48.545 [2024-07-12 09:27:34.747609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.545 [2024-07-12 09:27:34.747653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:48.545 [2024-07-12 09:27:34.747675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.261 ms 00:21:48.545 [2024-07-12 09:27:34.747688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.545 [2024-07-12 09:27:34.747811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.545 [2024-07-12 09:27:34.747833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:48.546 [2024-07-12 09:27:34.747849] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:48.546 [2024-07-12 09:27:34.747861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.546 [2024-07-12 09:27:34.752375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.546 [2024-07-12 09:27:34.752430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:48.546 [2024-07-12 09:27:34.752457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.443 ms 00:21:48.546 [2024-07-12 09:27:34.752470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.546 [2024-07-12 09:27:34.752628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.546 [2024-07-12 09:27:34.752650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:48.546 [2024-07-12 09:27:34.752666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:21:48.546 [2024-07-12 09:27:34.752678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.546 [2024-07-12 09:27:34.752737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.546 [2024-07-12 09:27:34.752753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:48.546 [2024-07-12 09:27:34.752768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:48.546 [2024-07-12 09:27:34.752779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.546 [2024-07-12 09:27:34.752818] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:48.546 [2024-07-12 09:27:34.757103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.546 [2024-07-12 09:27:34.757146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:48.546 [2024-07-12 09:27:34.757171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.297 ms 00:21:48.546 [2024-07-12 09:27:34.757208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.546 [2024-07-12 09:27:34.757282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.546 [2024-07-12 09:27:34.757307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:48.546 [2024-07-12 09:27:34.757321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:48.546 [2024-07-12 09:27:34.757338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.546 [2024-07-12 09:27:34.757368] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:48.546 [2024-07-12 09:27:34.757396] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:48.546 [2024-07-12 09:27:34.757447] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:48.546 [2024-07-12 09:27:34.757475] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:48.546 [2024-07-12 09:27:34.757581] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:48.546 [2024-07-12 09:27:34.757601] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:48.546 [2024-07-12 09:27:34.757620] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:48.546 [2024-07-12 09:27:34.757637] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:48.546 [2024-07-12 09:27:34.757652] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:48.546 [2024-07-12 09:27:34.757667] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:48.546 [2024-07-12 09:27:34.757678] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:48.546 [2024-07-12 09:27:34.757692] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:48.546 [2024-07-12 09:27:34.757703] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:48.546 [2024-07-12 09:27:34.757719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.546 [2024-07-12 09:27:34.757732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:48.546 [2024-07-12 09:27:34.757746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:21:48.546 [2024-07-12 09:27:34.757757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.546 [2024-07-12 09:27:34.757881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.546 [2024-07-12 09:27:34.757898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:48.546 [2024-07-12 09:27:34.757912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:48.546 [2024-07-12 09:27:34.757924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.546 [2024-07-12 09:27:34.758045] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:48.546 [2024-07-12 09:27:34.758071] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:48.546 [2024-07-12 09:27:34.758087] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:48.546 [2024-07-12 09:27:34.758099] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.546 [2024-07-12 09:27:34.758113] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:48.546 [2024-07-12 09:27:34.758124] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:48.546 [2024-07-12 09:27:34.758139] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:48.546 [2024-07-12 09:27:34.758150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:48.546 [2024-07-12 09:27:34.758165] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:48.546 [2024-07-12 09:27:34.758177] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:48.546 [2024-07-12 09:27:34.758211] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:48.546 [2024-07-12 09:27:34.758225] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:48.546 [2024-07-12 09:27:34.758238] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:48.546 [2024-07-12 09:27:34.758249] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:48.546 [2024-07-12 09:27:34.758262] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:48.546 [2024-07-12 09:27:34.758273] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.546 
[2024-07-12 09:27:34.758286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:48.546 [2024-07-12 09:27:34.758297] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:48.546 [2024-07-12 09:27:34.758310] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.546 [2024-07-12 09:27:34.758323] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:48.546 [2024-07-12 09:27:34.758337] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:48.546 [2024-07-12 09:27:34.758348] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:48.546 [2024-07-12 09:27:34.758361] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:48.546 [2024-07-12 09:27:34.758372] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:48.546 [2024-07-12 09:27:34.758386] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:48.546 [2024-07-12 09:27:34.758397] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:48.546 [2024-07-12 09:27:34.758410] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:48.546 [2024-07-12 09:27:34.758431] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:48.546 [2024-07-12 09:27:34.758445] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:48.546 [2024-07-12 09:27:34.758455] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:48.546 [2024-07-12 09:27:34.758469] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:48.546 [2024-07-12 09:27:34.758480] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:48.546 [2024-07-12 09:27:34.758492] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:48.546 [2024-07-12 09:27:34.758503] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:48.546 [2024-07-12 09:27:34.758515] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:48.546 [2024-07-12 09:27:34.758526] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:48.546 [2024-07-12 09:27:34.758539] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:48.546 [2024-07-12 09:27:34.758549] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:48.546 [2024-07-12 09:27:34.758562] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:48.546 [2024-07-12 09:27:34.758573] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.547 [2024-07-12 09:27:34.758588] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:48.547 [2024-07-12 09:27:34.758599] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:48.547 [2024-07-12 09:27:34.758611] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.547 [2024-07-12 09:27:34.758622] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:48.547 [2024-07-12 09:27:34.758638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:48.547 [2024-07-12 09:27:34.758650] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:48.547 [2024-07-12 09:27:34.758663] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.547 [2024-07-12 09:27:34.758675] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:21:48.547 [2024-07-12 09:27:34.758688] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:48.547 [2024-07-12 09:27:34.758698] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:48.547 [2024-07-12 09:27:34.758712] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:48.547 [2024-07-12 09:27:34.758723] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:48.547 [2024-07-12 09:27:34.758736] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:48.547 [2024-07-12 09:27:34.758749] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:48.547 [2024-07-12 09:27:34.758765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:48.547 [2024-07-12 09:27:34.758779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:48.547 [2024-07-12 09:27:34.758796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:48.547 [2024-07-12 09:27:34.758808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:48.547 [2024-07-12 09:27:34.758821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:48.547 [2024-07-12 09:27:34.758833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:48.547 [2024-07-12 09:27:34.758847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:48.547 [2024-07-12 09:27:34.758858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:48.547 [2024-07-12 09:27:34.758872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:48.547 [2024-07-12 09:27:34.758883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:48.547 [2024-07-12 09:27:34.758897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:48.547 [2024-07-12 09:27:34.758908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:48.547 [2024-07-12 09:27:34.758922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:48.547 [2024-07-12 09:27:34.758934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:48.547 [2024-07-12 09:27:34.758947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:48.547 [2024-07-12 09:27:34.758959] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:48.547 [2024-07-12 
09:27:34.758973] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:48.547 [2024-07-12 09:27:34.758985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:48.547 [2024-07-12 09:27:34.759001] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:48.547 [2024-07-12 09:27:34.759023] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:48.547 [2024-07-12 09:27:34.759037] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:48.547 [2024-07-12 09:27:34.759050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.547 [2024-07-12 09:27:34.759063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:48.547 [2024-07-12 09:27:34.759075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.075 ms 00:21:48.547 [2024-07-12 09:27:34.759088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.547 [2024-07-12 09:27:34.791962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.547 [2024-07-12 09:27:34.792030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:48.547 [2024-07-12 09:27:34.792052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.793 ms 00:21:48.547 [2024-07-12 09:27:34.792071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.547 [2024-07-12 09:27:34.792278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.547 [2024-07-12 09:27:34.792304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:48.547 [2024-07-12 09:27:34.792318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:21:48.547 [2024-07-12 09:27:34.792332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.547 [2024-07-12 09:27:34.830730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.547 [2024-07-12 09:27:34.830796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:48.547 [2024-07-12 09:27:34.830817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.365 ms 00:21:48.547 [2024-07-12 09:27:34.830831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.547 [2024-07-12 09:27:34.830945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.547 [2024-07-12 09:27:34.830968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:48.547 [2024-07-12 09:27:34.830982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:48.547 [2024-07-12 09:27:34.830996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.547 [2024-07-12 09:27:34.831344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.547 [2024-07-12 09:27:34.831368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:48.547 [2024-07-12 09:27:34.831386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:21:48.547 [2024-07-12 09:27:34.831400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:48.547 [2024-07-12 09:27:34.831562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.547 [2024-07-12 09:27:34.831584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:48.547 [2024-07-12 09:27:34.831597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:21:48.547 [2024-07-12 09:27:34.831610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.547 [2024-07-12 09:27:34.849565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.547 [2024-07-12 09:27:34.849625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:48.547 [2024-07-12 09:27:34.849645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.926 ms 00:21:48.547 [2024-07-12 09:27:34.849659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.547 [2024-07-12 09:27:34.866283] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:48.547 [2024-07-12 09:27:34.866342] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:48.547 [2024-07-12 09:27:34.866363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.547 [2024-07-12 09:27:34.866377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:48.547 [2024-07-12 09:27:34.866391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.551 ms 00:21:48.547 [2024-07-12 09:27:34.866404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.806 [2024-07-12 09:27:34.896710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.806 [2024-07-12 09:27:34.896767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:48.806 [2024-07-12 09:27:34.896789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.211 ms 00:21:48.806 [2024-07-12 09:27:34.896814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.806 [2024-07-12 09:27:34.912767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.806 [2024-07-12 09:27:34.912816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:48.806 [2024-07-12 09:27:34.912845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.852 ms 00:21:48.806 [2024-07-12 09:27:34.912863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.806 [2024-07-12 09:27:34.928480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.806 [2024-07-12 09:27:34.928544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:48.806 [2024-07-12 09:27:34.928563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.526 ms 00:21:48.807 [2024-07-12 09:27:34.928576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.807 [2024-07-12 09:27:34.929388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.807 [2024-07-12 09:27:34.929428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:48.807 [2024-07-12 09:27:34.929445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:21:48.807 [2024-07-12 09:27:34.929460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.807 [2024-07-12 
09:27:35.017349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.807 [2024-07-12 09:27:35.017435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:48.807 [2024-07-12 09:27:35.017459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.855 ms 00:21:48.807 [2024-07-12 09:27:35.017474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.807 [2024-07-12 09:27:35.030398] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:48.807 [2024-07-12 09:27:35.044457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.807 [2024-07-12 09:27:35.044539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:48.807 [2024-07-12 09:27:35.044567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.816 ms 00:21:48.807 [2024-07-12 09:27:35.044584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.807 [2024-07-12 09:27:35.044724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.807 [2024-07-12 09:27:35.044745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:48.807 [2024-07-12 09:27:35.044762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:48.807 [2024-07-12 09:27:35.044774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.807 [2024-07-12 09:27:35.044845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.807 [2024-07-12 09:27:35.044861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:48.807 [2024-07-12 09:27:35.044876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:48.807 [2024-07-12 09:27:35.044888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.807 [2024-07-12 09:27:35.044926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.807 [2024-07-12 09:27:35.044940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:48.807 [2024-07-12 09:27:35.044957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:48.807 [2024-07-12 09:27:35.044969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.807 [2024-07-12 09:27:35.045010] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:48.807 [2024-07-12 09:27:35.045026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.807 [2024-07-12 09:27:35.045041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:48.807 [2024-07-12 09:27:35.045054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:48.807 [2024-07-12 09:27:35.045067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.807 [2024-07-12 09:27:35.076738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.807 [2024-07-12 09:27:35.076804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:48.807 [2024-07-12 09:27:35.076825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.641 ms 00:21:48.807 [2024-07-12 09:27:35.076839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.807 [2024-07-12 09:27:35.076975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.807 [2024-07-12 09:27:35.077001] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:48.807 [2024-07-12 09:27:35.077015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:48.807 [2024-07-12 09:27:35.077029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.807 [2024-07-12 09:27:35.078041] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:48.807 [2024-07-12 09:27:35.082245] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 352.166 ms, result 0 00:21:48.807 [2024-07-12 09:27:35.083595] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:48.807 Some configs were skipped because the RPC state that can call them passed over. 00:21:48.807 09:27:35 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:21:49.066 [2024-07-12 09:27:35.393813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.066 [2024-07-12 09:27:35.393872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:49.066 [2024-07-12 09:27:35.393899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.397 ms 00:21:49.066 [2024-07-12 09:27:35.393912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.066 [2024-07-12 09:27:35.393964] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.559 ms, result 0 00:21:49.066 true 00:21:49.066 09:27:35 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:21:49.633 [2024-07-12 09:27:35.681784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.633 [2024-07-12 09:27:35.681865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:49.633 [2024-07-12 09:27:35.681887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.957 ms 00:21:49.633 [2024-07-12 09:27:35.681902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.633 [2024-07-12 09:27:35.681952] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.128 ms, result 0 00:21:49.633 true 00:21:49.633 09:27:35 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 81105 00:21:49.633 09:27:35 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 81105 ']' 00:21:49.633 09:27:35 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 81105 00:21:49.633 09:27:35 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:21:49.633 09:27:35 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:49.633 09:27:35 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81105 00:21:49.633 killing process with pid 81105 00:21:49.633 09:27:35 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:49.633 09:27:35 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:49.633 09:27:35 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81105' 00:21:49.633 09:27:35 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 81105 00:21:49.633 09:27:35 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 81105 00:21:50.571 [2024-07-12 09:27:36.691667] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.571 [2024-07-12 09:27:36.691735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:50.571 [2024-07-12 09:27:36.691759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:50.571 [2024-07-12 09:27:36.691772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.571 [2024-07-12 09:27:36.691808] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:50.571 [2024-07-12 09:27:36.695096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.571 [2024-07-12 09:27:36.695159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:50.571 [2024-07-12 09:27:36.695177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.266 ms 00:21:50.571 [2024-07-12 09:27:36.695207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.571 [2024-07-12 09:27:36.695544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.571 [2024-07-12 09:27:36.695570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:50.571 [2024-07-12 09:27:36.695584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:21:50.571 [2024-07-12 09:27:36.695597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.571 [2024-07-12 09:27:36.699687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.571 [2024-07-12 09:27:36.699736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:50.571 [2024-07-12 09:27:36.699756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.066 ms 00:21:50.571 [2024-07-12 09:27:36.699770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.571 [2024-07-12 09:27:36.707340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.571 [2024-07-12 09:27:36.707379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:50.571 [2024-07-12 09:27:36.707395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.525 ms 00:21:50.571 [2024-07-12 09:27:36.707411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.571 [2024-07-12 09:27:36.719963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.571 [2024-07-12 09:27:36.720009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:50.571 [2024-07-12 09:27:36.720026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.469 ms 00:21:50.571 [2024-07-12 09:27:36.720042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.571 [2024-07-12 09:27:36.728590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.571 [2024-07-12 09:27:36.728638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:50.571 [2024-07-12 09:27:36.728657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.485 ms 00:21:50.571 [2024-07-12 09:27:36.728671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.571 [2024-07-12 09:27:36.728830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.571 [2024-07-12 09:27:36.728854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:50.571 [2024-07-12 09:27:36.728867] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:21:50.571 [2024-07-12 09:27:36.728894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.571 [2024-07-12 09:27:36.741789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.571 [2024-07-12 09:27:36.741834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:50.571 [2024-07-12 09:27:36.741851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.870 ms 00:21:50.571 [2024-07-12 09:27:36.741865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.571 [2024-07-12 09:27:36.754530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.571 [2024-07-12 09:27:36.754574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:50.571 [2024-07-12 09:27:36.754590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.605 ms 00:21:50.571 [2024-07-12 09:27:36.754608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.571 [2024-07-12 09:27:36.766851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.571 [2024-07-12 09:27:36.766894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:50.571 [2024-07-12 09:27:36.766910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.182 ms 00:21:50.571 [2024-07-12 09:27:36.766923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.571 [2024-07-12 09:27:36.779262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.571 [2024-07-12 09:27:36.779305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:50.571 [2024-07-12 09:27:36.779321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.250 ms 00:21:50.571 [2024-07-12 09:27:36.779335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.571 [2024-07-12 09:27:36.779394] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:50.571 [2024-07-12 09:27:36.779432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 
09:27:36.779573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:21:50.571 [2024-07-12 09:27:36.779910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.779998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.780013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:50.571 [2024-07-12 09:27:36.780026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:50.572 [2024-07-12 09:27:36.780828] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:50.572 [2024-07-12 09:27:36.780841] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a224f06c-e4f3-4bb1-bd64-4dc6315ffcd7 00:21:50.572 [2024-07-12 09:27:36.780860] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:50.572 [2024-07-12 09:27:36.780872] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:50.572 [2024-07-12 09:27:36.780884] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:50.572 [2024-07-12 09:27:36.780896] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:50.572 [2024-07-12 09:27:36.780910] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:50.572 [2024-07-12 09:27:36.780922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:50.572 [2024-07-12 09:27:36.780935] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:50.572 [2024-07-12 09:27:36.780945] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:50.572 [2024-07-12 09:27:36.780969] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:50.572 [2024-07-12 09:27:36.780982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:50.572 [2024-07-12 09:27:36.780995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:50.572 [2024-07-12 09:27:36.781008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.590 ms 00:21:50.572 [2024-07-12 09:27:36.781021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.572 [2024-07-12 09:27:36.797533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.572 [2024-07-12 09:27:36.797579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:50.572 [2024-07-12 09:27:36.797597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.470 ms 00:21:50.572 [2024-07-12 09:27:36.797613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.572 [2024-07-12 09:27:36.798076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.572 [2024-07-12 09:27:36.798112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:50.572 [2024-07-12 09:27:36.798131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:21:50.572 [2024-07-12 09:27:36.798148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.572 [2024-07-12 09:27:36.853091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.572 [2024-07-12 09:27:36.853153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:50.572 [2024-07-12 09:27:36.853172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.572 [2024-07-12 09:27:36.853207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.572 [2024-07-12 09:27:36.853366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.572 [2024-07-12 09:27:36.853389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:50.572 [2024-07-12 09:27:36.853403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.572 [2024-07-12 09:27:36.853419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.572 [2024-07-12 09:27:36.853484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.572 [2024-07-12 09:27:36.853507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:50.572 [2024-07-12 09:27:36.853521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.572 [2024-07-12 09:27:36.853537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.572 [2024-07-12 09:27:36.853563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.572 [2024-07-12 09:27:36.853579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:50.572 [2024-07-12 09:27:36.853591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.572 [2024-07-12 09:27:36.853604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.831 [2024-07-12 09:27:36.952223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.831 [2024-07-12 09:27:36.952297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:50.831 [2024-07-12 09:27:36.952317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.831 [2024-07-12 09:27:36.952331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.831 [2024-07-12 
09:27:37.036516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.831 [2024-07-12 09:27:37.036589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:50.832 [2024-07-12 09:27:37.036609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.832 [2024-07-12 09:27:37.036623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.832 [2024-07-12 09:27:37.036732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.832 [2024-07-12 09:27:37.036755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:50.832 [2024-07-12 09:27:37.036769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.832 [2024-07-12 09:27:37.036785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.832 [2024-07-12 09:27:37.036820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.832 [2024-07-12 09:27:37.036837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:50.832 [2024-07-12 09:27:37.036849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.832 [2024-07-12 09:27:37.036862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.832 [2024-07-12 09:27:37.036987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.832 [2024-07-12 09:27:37.037009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:50.832 [2024-07-12 09:27:37.037022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.832 [2024-07-12 09:27:37.037035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.832 [2024-07-12 09:27:37.037085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.832 [2024-07-12 09:27:37.037109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:50.832 [2024-07-12 09:27:37.037122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.832 [2024-07-12 09:27:37.037135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.832 [2024-07-12 09:27:37.037182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.832 [2024-07-12 09:27:37.037233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:50.832 [2024-07-12 09:27:37.037246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.832 [2024-07-12 09:27:37.037262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.832 [2024-07-12 09:27:37.037318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.832 [2024-07-12 09:27:37.037339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:50.832 [2024-07-12 09:27:37.037352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.832 [2024-07-12 09:27:37.037366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.832 [2024-07-12 09:27:37.037527] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 345.844 ms, result 0 00:21:51.769 09:27:37 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:51.769 09:27:37 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:51.769 [2024-07-12 09:27:38.078832] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:21:51.769 [2024-07-12 09:27:38.078994] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81171 ] 00:21:52.050 [2024-07-12 09:27:38.250734] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.308 [2024-07-12 09:27:38.480345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.567 [2024-07-12 09:27:38.796781] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:52.567 [2024-07-12 09:27:38.796875] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:52.827 [2024-07-12 09:27:38.960815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.827 [2024-07-12 09:27:38.960893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:52.827 [2024-07-12 09:27:38.960930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:52.827 [2024-07-12 09:27:38.960942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.827 [2024-07-12 09:27:38.964215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.827 [2024-07-12 09:27:38.964279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:52.827 [2024-07-12 09:27:38.964298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.243 ms 00:21:52.827 [2024-07-12 09:27:38.964309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.827 [2024-07-12 09:27:38.964434] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:52.827 [2024-07-12 09:27:38.965396] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:52.827 [2024-07-12 09:27:38.965439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.827 [2024-07-12 09:27:38.965454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:52.827 [2024-07-12 09:27:38.965467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.018 ms 00:21:52.827 [2024-07-12 09:27:38.965478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.827 [2024-07-12 09:27:38.966710] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:52.827 [2024-07-12 09:27:38.984015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.827 [2024-07-12 09:27:38.984085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:52.827 [2024-07-12 09:27:38.984112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.306 ms 00:21:52.827 [2024-07-12 09:27:38.984124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.827 [2024-07-12 09:27:38.984273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.827 [2024-07-12 09:27:38.984298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:52.827 [2024-07-12 09:27:38.984320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.027 ms 00:21:52.827 [2024-07-12 09:27:38.984339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.827 [2024-07-12 09:27:38.989095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.827 [2024-07-12 09:27:38.989159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:52.827 [2024-07-12 09:27:38.989191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.692 ms 00:21:52.827 [2024-07-12 09:27:38.989213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.827 [2024-07-12 09:27:38.989342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.827 [2024-07-12 09:27:38.989363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:52.827 [2024-07-12 09:27:38.989377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:52.827 [2024-07-12 09:27:38.989388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.827 [2024-07-12 09:27:38.989432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.827 [2024-07-12 09:27:38.989448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:52.827 [2024-07-12 09:27:38.989460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:52.827 [2024-07-12 09:27:38.989474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.827 [2024-07-12 09:27:38.989506] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:52.827 [2024-07-12 09:27:38.993981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.827 [2024-07-12 09:27:38.994020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:52.827 [2024-07-12 09:27:38.994037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.484 ms 00:21:52.827 [2024-07-12 09:27:38.994048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.827 [2024-07-12 09:27:38.994119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.827 [2024-07-12 09:27:38.994138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:52.827 [2024-07-12 09:27:38.994150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:52.827 [2024-07-12 09:27:38.994161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.827 [2024-07-12 09:27:38.994242] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:52.827 [2024-07-12 09:27:38.994276] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:52.827 [2024-07-12 09:27:38.994322] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:52.827 [2024-07-12 09:27:38.994343] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:52.827 [2024-07-12 09:27:38.994449] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:52.827 [2024-07-12 09:27:38.994464] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:52.827 [2024-07-12 09:27:38.994491] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:52.827 [2024-07-12 09:27:38.994506] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:52.827 [2024-07-12 09:27:38.994519] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:52.828 [2024-07-12 09:27:38.994531] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:52.828 [2024-07-12 09:27:38.994546] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:52.828 [2024-07-12 09:27:38.994557] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:52.828 [2024-07-12 09:27:38.994568] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:52.828 [2024-07-12 09:27:38.994579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.828 [2024-07-12 09:27:38.994592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:52.828 [2024-07-12 09:27:38.994604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:21:52.828 [2024-07-12 09:27:38.994615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.828 [2024-07-12 09:27:38.994713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.828 [2024-07-12 09:27:38.994728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:52.828 [2024-07-12 09:27:38.994741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:52.828 [2024-07-12 09:27:38.994756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.828 [2024-07-12 09:27:38.994871] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:52.828 [2024-07-12 09:27:38.994887] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:52.828 [2024-07-12 09:27:38.994899] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:52.828 [2024-07-12 09:27:38.994911] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.828 [2024-07-12 09:27:38.994923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:52.828 [2024-07-12 09:27:38.994933] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:52.828 [2024-07-12 09:27:38.994944] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:52.828 [2024-07-12 09:27:38.994955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:52.828 [2024-07-12 09:27:38.994965] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:52.828 [2024-07-12 09:27:38.994975] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:52.828 [2024-07-12 09:27:38.994986] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:52.828 [2024-07-12 09:27:38.994996] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:52.828 [2024-07-12 09:27:38.995005] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:52.828 [2024-07-12 09:27:38.995016] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:52.828 [2024-07-12 09:27:38.995027] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:52.828 [2024-07-12 09:27:38.995036] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.828 [2024-07-12 09:27:38.995047] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:52.828 [2024-07-12 09:27:38.995057] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:52.828 [2024-07-12 09:27:38.995080] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.828 [2024-07-12 09:27:38.995091] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:52.828 [2024-07-12 09:27:38.995101] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:52.828 [2024-07-12 09:27:38.995111] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:52.828 [2024-07-12 09:27:38.995121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:52.828 [2024-07-12 09:27:38.995132] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:52.828 [2024-07-12 09:27:38.995141] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:52.828 [2024-07-12 09:27:38.995151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:52.828 [2024-07-12 09:27:38.995161] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:52.828 [2024-07-12 09:27:38.995171] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:52.828 [2024-07-12 09:27:38.995180] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:52.828 [2024-07-12 09:27:38.995211] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:52.828 [2024-07-12 09:27:38.995221] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:52.828 [2024-07-12 09:27:38.995232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:52.828 [2024-07-12 09:27:38.995242] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:52.828 [2024-07-12 09:27:38.995252] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:52.828 [2024-07-12 09:27:38.995262] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:52.828 [2024-07-12 09:27:38.995273] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:52.828 [2024-07-12 09:27:38.995283] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:52.828 [2024-07-12 09:27:38.995293] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:52.828 [2024-07-12 09:27:38.995303] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:52.828 [2024-07-12 09:27:38.995313] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.828 [2024-07-12 09:27:38.995323] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:52.828 [2024-07-12 09:27:38.995333] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:52.828 [2024-07-12 09:27:38.995343] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.828 [2024-07-12 09:27:38.995353] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:52.828 [2024-07-12 09:27:38.995363] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:52.828 [2024-07-12 09:27:38.995374] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:52.828 [2024-07-12 09:27:38.995385] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:52.828 [2024-07-12 09:27:38.995396] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:52.828 
[2024-07-12 09:27:38.995407] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:52.828 [2024-07-12 09:27:38.995431] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:52.828 [2024-07-12 09:27:38.995455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:52.828 [2024-07-12 09:27:38.995471] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:52.828 [2024-07-12 09:27:38.995482] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:52.828 [2024-07-12 09:27:38.995494] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:52.828 [2024-07-12 09:27:38.995515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:52.828 [2024-07-12 09:27:38.995528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:52.828 [2024-07-12 09:27:38.995539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:52.828 [2024-07-12 09:27:38.995550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:52.828 [2024-07-12 09:27:38.995561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:52.828 [2024-07-12 09:27:38.995572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:52.828 [2024-07-12 09:27:38.995583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:52.828 [2024-07-12 09:27:38.995594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:52.828 [2024-07-12 09:27:38.995605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:52.828 [2024-07-12 09:27:38.995617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:52.828 [2024-07-12 09:27:38.995627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:52.828 [2024-07-12 09:27:38.995639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:52.828 [2024-07-12 09:27:38.995650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:52.828 [2024-07-12 09:27:38.995661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:52.828 [2024-07-12 09:27:38.995672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:52.828 [2024-07-12 09:27:38.995684] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:52.828 [2024-07-12 09:27:38.995699] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:52.828 [2024-07-12 09:27:38.995720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:52.828 [2024-07-12 09:27:38.995735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:52.828 [2024-07-12 09:27:38.995747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:52.828 [2024-07-12 09:27:38.995758] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:52.828 [2024-07-12 09:27:38.995771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.828 [2024-07-12 09:27:38.995782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:52.828 [2024-07-12 09:27:38.995794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.966 ms 00:21:52.828 [2024-07-12 09:27:38.995806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.828 [2024-07-12 09:27:39.040922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.828 [2024-07-12 09:27:39.040989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:52.828 [2024-07-12 09:27:39.041011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.037 ms 00:21:52.828 [2024-07-12 09:27:39.041024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.828 [2024-07-12 09:27:39.041253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.828 [2024-07-12 09:27:39.041292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:52.828 [2024-07-12 09:27:39.041307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:21:52.828 [2024-07-12 09:27:39.041325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.828 [2024-07-12 09:27:39.081284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.828 [2024-07-12 09:27:39.081346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:52.828 [2024-07-12 09:27:39.081366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.924 ms 00:21:52.828 [2024-07-12 09:27:39.081378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.828 [2024-07-12 09:27:39.081507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.828 [2024-07-12 09:27:39.081527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:52.828 [2024-07-12 09:27:39.081541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:52.828 [2024-07-12 09:27:39.081552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.828 [2024-07-12 09:27:39.081882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.829 [2024-07-12 09:27:39.081901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:52.829 [2024-07-12 09:27:39.081913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:21:52.829 [2024-07-12 09:27:39.081924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.829 [2024-07-12 
09:27:39.082080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.829 [2024-07-12 09:27:39.082102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:52.829 [2024-07-12 09:27:39.082115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:21:52.829 [2024-07-12 09:27:39.082126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.829 [2024-07-12 09:27:39.098757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.829 [2024-07-12 09:27:39.098812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:52.829 [2024-07-12 09:27:39.098831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.601 ms 00:21:52.829 [2024-07-12 09:27:39.098843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.829 [2024-07-12 09:27:39.115279] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:52.829 [2024-07-12 09:27:39.115329] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:52.829 [2024-07-12 09:27:39.115349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.829 [2024-07-12 09:27:39.115362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:52.829 [2024-07-12 09:27:39.115384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.342 ms 00:21:52.829 [2024-07-12 09:27:39.115395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.829 [2024-07-12 09:27:39.145349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.829 [2024-07-12 09:27:39.145399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:52.829 [2024-07-12 09:27:39.145418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.839 ms 00:21:52.829 [2024-07-12 09:27:39.145439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.829 [2024-07-12 09:27:39.161305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.829 [2024-07-12 09:27:39.161348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:52.829 [2024-07-12 09:27:39.161366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.752 ms 00:21:52.829 [2024-07-12 09:27:39.161377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.829 [2024-07-12 09:27:39.176986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.829 [2024-07-12 09:27:39.177029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:52.829 [2024-07-12 09:27:39.177045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.516 ms 00:21:52.829 [2024-07-12 09:27:39.177056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.088 [2024-07-12 09:27:39.177889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.088 [2024-07-12 09:27:39.177929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:53.088 [2024-07-12 09:27:39.177945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.708 ms 00:21:53.088 [2024-07-12 09:27:39.177956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.088 [2024-07-12 09:27:39.250602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:53.088 [2024-07-12 09:27:39.250675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:53.088 [2024-07-12 09:27:39.250696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.609 ms 00:21:53.088 [2024-07-12 09:27:39.250708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.088 [2024-07-12 09:27:39.263457] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:53.088 [2024-07-12 09:27:39.277542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.088 [2024-07-12 09:27:39.277606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:53.088 [2024-07-12 09:27:39.277626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.689 ms 00:21:53.088 [2024-07-12 09:27:39.277638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.088 [2024-07-12 09:27:39.277772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.088 [2024-07-12 09:27:39.277793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:53.088 [2024-07-12 09:27:39.277812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:53.088 [2024-07-12 09:27:39.277823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.088 [2024-07-12 09:27:39.277892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.088 [2024-07-12 09:27:39.277909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:53.088 [2024-07-12 09:27:39.277921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:53.088 [2024-07-12 09:27:39.277932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.088 [2024-07-12 09:27:39.277965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.088 [2024-07-12 09:27:39.277980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:53.088 [2024-07-12 09:27:39.277992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:53.088 [2024-07-12 09:27:39.278008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.088 [2024-07-12 09:27:39.278051] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:53.088 [2024-07-12 09:27:39.278067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.088 [2024-07-12 09:27:39.278079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:53.088 [2024-07-12 09:27:39.278091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:53.088 [2024-07-12 09:27:39.278101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.088 [2024-07-12 09:27:39.309322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.088 [2024-07-12 09:27:39.309369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:53.088 [2024-07-12 09:27:39.309395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.191 ms 00:21:53.088 [2024-07-12 09:27:39.309407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.088 [2024-07-12 09:27:39.309536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.088 [2024-07-12 09:27:39.309558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:21:53.088 [2024-07-12 09:27:39.309571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:53.088 [2024-07-12 09:27:39.309582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.088 [2024-07-12 09:27:39.310503] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:53.088 [2024-07-12 09:27:39.314596] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 349.335 ms, result 0 00:21:53.088 [2024-07-12 09:27:39.315412] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:53.088 [2024-07-12 09:27:39.332199] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:02.974  Copying: 28/256 [MB] (28 MBps) Copying: 55/256 [MB] (26 MBps) Copying: 79/256 [MB] (24 MBps) Copying: 104/256 [MB] (24 MBps) Copying: 129/256 [MB] (24 MBps) Copying: 155/256 [MB] (25 MBps) Copying: 181/256 [MB] (26 MBps) Copying: 207/256 [MB] (25 MBps) Copying: 232/256 [MB] (25 MBps) Copying: 256/256 [MB] (average 25 MBps)[2024-07-12 09:27:49.262647] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:02.974 [2024-07-12 09:27:49.275023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.974 [2024-07-12 09:27:49.275074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:02.974 [2024-07-12 09:27:49.275107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:02.974 [2024-07-12 09:27:49.275119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.974 [2024-07-12 09:27:49.275152] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:02.974 [2024-07-12 09:27:49.278489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.974 [2024-07-12 09:27:49.278553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:02.974 [2024-07-12 09:27:49.278569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.315 ms 00:22:02.974 [2024-07-12 09:27:49.278585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.974 [2024-07-12 09:27:49.278871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.974 [2024-07-12 09:27:49.278889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:02.974 [2024-07-12 09:27:49.278901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:22:02.974 [2024-07-12 09:27:49.278913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.974 [2024-07-12 09:27:49.282705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.974 [2024-07-12 09:27:49.282738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:02.974 [2024-07-12 09:27:49.282753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.771 ms 00:22:02.974 [2024-07-12 09:27:49.282771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.974 [2024-07-12 09:27:49.290404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.974 [2024-07-12 09:27:49.290439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:02.975 [2024-07-12 09:27:49.290454] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.607 ms 00:22:02.975 [2024-07-12 09:27:49.290466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.975 [2024-07-12 09:27:49.321794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.975 [2024-07-12 09:27:49.321843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:02.975 [2024-07-12 09:27:49.321862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.253 ms 00:22:02.975 [2024-07-12 09:27:49.321874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.234 [2024-07-12 09:27:49.339743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.234 [2024-07-12 09:27:49.339790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:03.234 [2024-07-12 09:27:49.339809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.816 ms 00:22:03.234 [2024-07-12 09:27:49.339820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.234 [2024-07-12 09:27:49.340008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.234 [2024-07-12 09:27:49.340030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:03.234 [2024-07-12 09:27:49.340043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:22:03.234 [2024-07-12 09:27:49.340055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.234 [2024-07-12 09:27:49.372437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.234 [2024-07-12 09:27:49.372482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:03.234 [2024-07-12 09:27:49.372500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.359 ms 00:22:03.234 [2024-07-12 09:27:49.372511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.234 [2024-07-12 09:27:49.405365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.234 [2024-07-12 09:27:49.405409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:03.234 [2024-07-12 09:27:49.405427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.806 ms 00:22:03.234 [2024-07-12 09:27:49.405439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.234 [2024-07-12 09:27:49.437214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.234 [2024-07-12 09:27:49.437303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:03.234 [2024-07-12 09:27:49.437340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.727 ms 00:22:03.234 [2024-07-12 09:27:49.437351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.234 [2024-07-12 09:27:49.469326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.234 [2024-07-12 09:27:49.469370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:03.234 [2024-07-12 09:27:49.469387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.893 ms 00:22:03.235 [2024-07-12 09:27:49.469399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.235 [2024-07-12 09:27:49.469445] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:03.235 [2024-07-12 09:27:49.469470] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469775] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.469989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 
09:27:49.470071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:22:03.235 [2024-07-12 09:27:49.470391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:03.235 [2024-07-12 09:27:49.470612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:03.236 [2024-07-12 09:27:49.470624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:03.236 [2024-07-12 09:27:49.470635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:03.236 [2024-07-12 09:27:49.470647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:03.236 [2024-07-12 09:27:49.470659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:03.236 [2024-07-12 09:27:49.470670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:03.236 [2024-07-12 09:27:49.470682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:22:03.236 [2024-07-12 09:27:49.470702] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:03.236 [2024-07-12 09:27:49.470714] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a224f06c-e4f3-4bb1-bd64-4dc6315ffcd7 00:22:03.236 [2024-07-12 09:27:49.470726] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:03.236 [2024-07-12 09:27:49.470737] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:03.236 [2024-07-12 09:27:49.470761] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:03.236 [2024-07-12 09:27:49.470772] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:03.236 [2024-07-12 09:27:49.470783] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:03.236 [2024-07-12 09:27:49.470793] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:03.236 [2024-07-12 09:27:49.470804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:03.236 [2024-07-12 09:27:49.470814] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:03.236 [2024-07-12 09:27:49.470824] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:03.236 [2024-07-12 09:27:49.470835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.236 [2024-07-12 09:27:49.470846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:03.236 [2024-07-12 09:27:49.470858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.391 ms 00:22:03.236 [2024-07-12 09:27:49.470873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.236 [2024-07-12 09:27:49.487883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.236 [2024-07-12 09:27:49.487924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:03.236 [2024-07-12 09:27:49.487941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.984 ms 00:22:03.236 [2024-07-12 09:27:49.487953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.236 [2024-07-12 09:27:49.488428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.236 [2024-07-12 09:27:49.488459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:03.236 [2024-07-12 09:27:49.488481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:22:03.236 [2024-07-12 09:27:49.488493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.236 [2024-07-12 09:27:49.529609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.236 [2024-07-12 09:27:49.529664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:03.236 [2024-07-12 09:27:49.529681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.236 [2024-07-12 09:27:49.529693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.236 [2024-07-12 09:27:49.529790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.236 [2024-07-12 09:27:49.529807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:03.236 [2024-07-12 09:27:49.529826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.236 [2024-07-12 09:27:49.529837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:03.236 [2024-07-12 09:27:49.529899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.236 [2024-07-12 09:27:49.529918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:03.236 [2024-07-12 09:27:49.529930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.236 [2024-07-12 09:27:49.529940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.236 [2024-07-12 09:27:49.529965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.236 [2024-07-12 09:27:49.529979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:03.236 [2024-07-12 09:27:49.529991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.236 [2024-07-12 09:27:49.530008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.495 [2024-07-12 09:27:49.631088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.495 [2024-07-12 09:27:49.631146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:03.495 [2024-07-12 09:27:49.631166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.495 [2024-07-12 09:27:49.631178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.495 [2024-07-12 09:27:49.716295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.495 [2024-07-12 09:27:49.716364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:03.495 [2024-07-12 09:27:49.716384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.495 [2024-07-12 09:27:49.716404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.495 [2024-07-12 09:27:49.716489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.495 [2024-07-12 09:27:49.716507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:03.495 [2024-07-12 09:27:49.716519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.495 [2024-07-12 09:27:49.716531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.495 [2024-07-12 09:27:49.716567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.495 [2024-07-12 09:27:49.716580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:03.495 [2024-07-12 09:27:49.716592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.495 [2024-07-12 09:27:49.716602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.495 [2024-07-12 09:27:49.716728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.495 [2024-07-12 09:27:49.716749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:03.495 [2024-07-12 09:27:49.716763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.495 [2024-07-12 09:27:49.716774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.495 [2024-07-12 09:27:49.716824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.495 [2024-07-12 09:27:49.716842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:03.495 [2024-07-12 09:27:49.716854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.495 [2024-07-12 
09:27:49.716866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.495 [2024-07-12 09:27:49.716918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.495 [2024-07-12 09:27:49.716935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:03.495 [2024-07-12 09:27:49.716947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.495 [2024-07-12 09:27:49.716958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.495 [2024-07-12 09:27:49.717013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.495 [2024-07-12 09:27:49.717030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:03.495 [2024-07-12 09:27:49.717041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.495 [2024-07-12 09:27:49.717052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.495 [2024-07-12 09:27:49.717255] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 442.198 ms, result 0 00:22:04.869 00:22:04.869 00:22:04.869 09:27:50 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:22:04.869 09:27:50 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:05.129 09:27:51 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:05.387 [2024-07-12 09:27:51.545534] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:22:05.387 [2024-07-12 09:27:51.545706] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81307 ] 00:22:05.387 [2024-07-12 09:27:51.716944] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.644 [2024-07-12 09:27:51.945058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.212 [2024-07-12 09:27:52.256576] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:06.212 [2024-07-12 09:27:52.256655] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:06.212 [2024-07-12 09:27:52.417815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.212 [2024-07-12 09:27:52.417879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:06.212 [2024-07-12 09:27:52.417901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:06.212 [2024-07-12 09:27:52.417913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.212 [2024-07-12 09:27:52.421079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.212 [2024-07-12 09:27:52.421124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:06.212 [2024-07-12 09:27:52.421141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.136 ms 00:22:06.212 [2024-07-12 09:27:52.421152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.212 [2024-07-12 09:27:52.421289] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:06.212 [2024-07-12 09:27:52.422273] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:06.212 [2024-07-12 09:27:52.422316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.212 [2024-07-12 09:27:52.422331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:06.212 [2024-07-12 09:27:52.422343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.037 ms 00:22:06.212 [2024-07-12 09:27:52.422355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.212 [2024-07-12 09:27:52.423559] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:06.212 [2024-07-12 09:27:52.439644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.212 [2024-07-12 09:27:52.439688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:06.212 [2024-07-12 09:27:52.439710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.086 ms 00:22:06.212 [2024-07-12 09:27:52.439722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.212 [2024-07-12 09:27:52.439838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.212 [2024-07-12 09:27:52.439869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:06.212 [2024-07-12 09:27:52.439883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:06.212 [2024-07-12 09:27:52.439895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.212 [2024-07-12 09:27:52.444139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:06.212 [2024-07-12 09:27:52.444200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:06.212 [2024-07-12 09:27:52.444217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.185 ms 00:22:06.212 [2024-07-12 09:27:52.444230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.212 [2024-07-12 09:27:52.444358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.212 [2024-07-12 09:27:52.444379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:06.212 [2024-07-12 09:27:52.444393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:06.212 [2024-07-12 09:27:52.444404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.212 [2024-07-12 09:27:52.444449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.212 [2024-07-12 09:27:52.444465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:06.212 [2024-07-12 09:27:52.444478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:06.212 [2024-07-12 09:27:52.444492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.212 [2024-07-12 09:27:52.444524] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:06.212 [2024-07-12 09:27:52.448723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.212 [2024-07-12 09:27:52.448761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:06.212 [2024-07-12 09:27:52.448777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.209 ms 00:22:06.212 [2024-07-12 09:27:52.448788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.212 [2024-07-12 09:27:52.448855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.212 [2024-07-12 09:27:52.448874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:06.212 [2024-07-12 09:27:52.448886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:06.212 [2024-07-12 09:27:52.448897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.213 [2024-07-12 09:27:52.448928] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:06.213 [2024-07-12 09:27:52.448955] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:06.213 [2024-07-12 09:27:52.449002] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:06.213 [2024-07-12 09:27:52.449024] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:06.213 [2024-07-12 09:27:52.449132] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:06.213 [2024-07-12 09:27:52.449148] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:06.213 [2024-07-12 09:27:52.449163] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:06.213 [2024-07-12 09:27:52.449177] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:06.213 [2024-07-12 09:27:52.449221] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:06.213 [2024-07-12 09:27:52.449235] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:06.213 [2024-07-12 09:27:52.449251] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:06.213 [2024-07-12 09:27:52.449262] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:06.213 [2024-07-12 09:27:52.449272] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:06.213 [2024-07-12 09:27:52.449285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.213 [2024-07-12 09:27:52.449297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:06.213 [2024-07-12 09:27:52.449308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:22:06.213 [2024-07-12 09:27:52.449319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.213 [2024-07-12 09:27:52.449417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.213 [2024-07-12 09:27:52.449434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:06.213 [2024-07-12 09:27:52.449446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:06.213 [2024-07-12 09:27:52.449462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.213 [2024-07-12 09:27:52.449570] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:06.213 [2024-07-12 09:27:52.449587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:06.213 [2024-07-12 09:27:52.449599] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:06.213 [2024-07-12 09:27:52.449610] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.213 [2024-07-12 09:27:52.449627] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:06.213 [2024-07-12 09:27:52.449637] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:06.213 [2024-07-12 09:27:52.449648] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:06.213 [2024-07-12 09:27:52.449658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:06.213 [2024-07-12 09:27:52.449669] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:06.213 [2024-07-12 09:27:52.449679] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:06.213 [2024-07-12 09:27:52.449689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:06.213 [2024-07-12 09:27:52.449699] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:06.213 [2024-07-12 09:27:52.449709] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:06.213 [2024-07-12 09:27:52.449720] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:06.213 [2024-07-12 09:27:52.449731] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:06.213 [2024-07-12 09:27:52.449741] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.213 [2024-07-12 09:27:52.449751] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:06.213 [2024-07-12 09:27:52.449762] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:06.213 [2024-07-12 09:27:52.449786] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.213 [2024-07-12 09:27:52.449797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:06.213 [2024-07-12 09:27:52.449807] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:06.213 [2024-07-12 09:27:52.449817] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.213 [2024-07-12 09:27:52.449826] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:06.213 [2024-07-12 09:27:52.449836] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:06.213 [2024-07-12 09:27:52.449846] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.213 [2024-07-12 09:27:52.449856] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:06.213 [2024-07-12 09:27:52.449866] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:06.213 [2024-07-12 09:27:52.449876] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.213 [2024-07-12 09:27:52.449885] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:06.213 [2024-07-12 09:27:52.449895] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:06.213 [2024-07-12 09:27:52.449906] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.213 [2024-07-12 09:27:52.449915] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:06.213 [2024-07-12 09:27:52.449925] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:06.213 [2024-07-12 09:27:52.449935] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:06.213 [2024-07-12 09:27:52.449945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:06.213 [2024-07-12 09:27:52.449955] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:06.213 [2024-07-12 09:27:52.449965] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:06.213 [2024-07-12 09:27:52.449975] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:06.213 [2024-07-12 09:27:52.449985] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:06.213 [2024-07-12 09:27:52.449994] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.213 [2024-07-12 09:27:52.450004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:06.213 [2024-07-12 09:27:52.450015] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:06.213 [2024-07-12 09:27:52.450024] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.213 [2024-07-12 09:27:52.450034] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:06.213 [2024-07-12 09:27:52.450046] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:06.213 [2024-07-12 09:27:52.450058] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:06.213 [2024-07-12 09:27:52.450069] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.213 [2024-07-12 09:27:52.450080] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:06.213 [2024-07-12 09:27:52.450090] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:06.213 [2024-07-12 09:27:52.450100] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:06.213 
[2024-07-12 09:27:52.450111] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:06.213 [2024-07-12 09:27:52.450120] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:06.213 [2024-07-12 09:27:52.450131] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:06.213 [2024-07-12 09:27:52.450143] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:06.213 [2024-07-12 09:27:52.450161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:06.213 [2024-07-12 09:27:52.450175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:06.213 [2024-07-12 09:27:52.450201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:06.213 [2024-07-12 09:27:52.450215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:06.213 [2024-07-12 09:27:52.450226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:06.213 [2024-07-12 09:27:52.450237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:06.213 [2024-07-12 09:27:52.450248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:06.213 [2024-07-12 09:27:52.450260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:06.213 [2024-07-12 09:27:52.450271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:06.213 [2024-07-12 09:27:52.450282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:06.213 [2024-07-12 09:27:52.450293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:06.213 [2024-07-12 09:27:52.450304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:06.213 [2024-07-12 09:27:52.450316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:06.213 [2024-07-12 09:27:52.450327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:06.213 [2024-07-12 09:27:52.450338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:06.213 [2024-07-12 09:27:52.450349] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:06.213 [2024-07-12 09:27:52.450361] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:06.213 [2024-07-12 09:27:52.450374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:06.213 [2024-07-12 09:27:52.450385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:06.213 [2024-07-12 09:27:52.450397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:06.213 [2024-07-12 09:27:52.450408] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:06.213 [2024-07-12 09:27:52.450420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.213 [2024-07-12 09:27:52.450432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:06.213 [2024-07-12 09:27:52.450446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.917 ms 00:22:06.213 [2024-07-12 09:27:52.450457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.213 [2024-07-12 09:27:52.491286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.213 [2024-07-12 09:27:52.491350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:06.213 [2024-07-12 09:27:52.491373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.729 ms 00:22:06.213 [2024-07-12 09:27:52.491385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.213 [2024-07-12 09:27:52.491597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.214 [2024-07-12 09:27:52.491618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:06.214 [2024-07-12 09:27:52.491632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:06.214 [2024-07-12 09:27:52.491650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.214 [2024-07-12 09:27:52.533829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.214 [2024-07-12 09:27:52.533876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:06.214 [2024-07-12 09:27:52.533895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.145 ms 00:22:06.214 [2024-07-12 09:27:52.533906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.214 [2024-07-12 09:27:52.534032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.214 [2024-07-12 09:27:52.534052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:06.214 [2024-07-12 09:27:52.534066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:06.214 [2024-07-12 09:27:52.534077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.214 [2024-07-12 09:27:52.534429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.214 [2024-07-12 09:27:52.534448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:06.214 [2024-07-12 09:27:52.534461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:22:06.214 [2024-07-12 09:27:52.534472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.214 [2024-07-12 09:27:52.534631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.214 [2024-07-12 09:27:52.534659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:06.214 [2024-07-12 09:27:52.534672] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:22:06.214 [2024-07-12 09:27:52.534684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.214 [2024-07-12 09:27:52.552650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.214 [2024-07-12 09:27:52.552711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:06.214 [2024-07-12 09:27:52.552729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.922 ms 00:22:06.214 [2024-07-12 09:27:52.552741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.473 [2024-07-12 09:27:52.570872] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:06.473 [2024-07-12 09:27:52.570919] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:06.473 [2024-07-12 09:27:52.570939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.473 [2024-07-12 09:27:52.570952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:06.473 [2024-07-12 09:27:52.570965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.044 ms 00:22:06.473 [2024-07-12 09:27:52.570976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.473 [2024-07-12 09:27:52.604258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.473 [2024-07-12 09:27:52.604308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:06.473 [2024-07-12 09:27:52.604326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.182 ms 00:22:06.473 [2024-07-12 09:27:52.604338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.473 [2024-07-12 09:27:52.621704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.473 [2024-07-12 09:27:52.621750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:06.473 [2024-07-12 09:27:52.621768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.266 ms 00:22:06.473 [2024-07-12 09:27:52.621779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.473 [2024-07-12 09:27:52.638875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.473 [2024-07-12 09:27:52.638933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:06.473 [2024-07-12 09:27:52.638950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.002 ms 00:22:06.473 [2024-07-12 09:27:52.638961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.473 [2024-07-12 09:27:52.639789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.473 [2024-07-12 09:27:52.639828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:06.473 [2024-07-12 09:27:52.639844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.708 ms 00:22:06.473 [2024-07-12 09:27:52.639856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.473 [2024-07-12 09:27:52.712704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.473 [2024-07-12 09:27:52.712785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:06.473 [2024-07-12 09:27:52.712807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 72.812 ms 00:22:06.473 [2024-07-12 09:27:52.712819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.473 [2024-07-12 09:27:52.726403] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:06.473 [2024-07-12 09:27:52.740788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.473 [2024-07-12 09:27:52.740873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:06.473 [2024-07-12 09:27:52.740909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.826 ms 00:22:06.473 [2024-07-12 09:27:52.740920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.473 [2024-07-12 09:27:52.741060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.473 [2024-07-12 09:27:52.741080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:06.473 [2024-07-12 09:27:52.741099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:06.473 [2024-07-12 09:27:52.741111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.473 [2024-07-12 09:27:52.741180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.473 [2024-07-12 09:27:52.741223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:06.473 [2024-07-12 09:27:52.741241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:06.473 [2024-07-12 09:27:52.741252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.473 [2024-07-12 09:27:52.741287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.473 [2024-07-12 09:27:52.741301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:06.473 [2024-07-12 09:27:52.741314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:06.473 [2024-07-12 09:27:52.741330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.473 [2024-07-12 09:27:52.741366] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:06.473 [2024-07-12 09:27:52.741385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.473 [2024-07-12 09:27:52.741398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:06.473 [2024-07-12 09:27:52.741411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:06.473 [2024-07-12 09:27:52.741423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.473 [2024-07-12 09:27:52.774147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.473 [2024-07-12 09:27:52.774214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:06.473 [2024-07-12 09:27:52.774241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.690 ms 00:22:06.473 [2024-07-12 09:27:52.774253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.473 [2024-07-12 09:27:52.774381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.473 [2024-07-12 09:27:52.774402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:06.473 [2024-07-12 09:27:52.774415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:06.473 [2024-07-12 09:27:52.774426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:06.473 [2024-07-12 09:27:52.775558] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:06.474 [2024-07-12 09:27:52.779687] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 357.384 ms, result 0 00:22:06.474 [2024-07-12 09:27:52.780426] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:06.474 [2024-07-12 09:27:52.797061] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:06.732  Copying: 4096/4096 [kB] (average 26 MBps)[2024-07-12 09:27:52.952788] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:06.732 [2024-07-12 09:27:52.965239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.732 [2024-07-12 09:27:52.965282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:06.732 [2024-07-12 09:27:52.965302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:06.732 [2024-07-12 09:27:52.965314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.732 [2024-07-12 09:27:52.965346] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:06.732 [2024-07-12 09:27:52.968816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.732 [2024-07-12 09:27:52.968870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:06.732 [2024-07-12 09:27:52.968885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.450 ms 00:22:06.732 [2024-07-12 09:27:52.968897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.732 [2024-07-12 09:27:52.970523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.732 [2024-07-12 09:27:52.970677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:06.732 [2024-07-12 09:27:52.970800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.596 ms 00:22:06.732 [2024-07-12 09:27:52.970850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.732 [2024-07-12 09:27:52.974950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.732 [2024-07-12 09:27:52.974990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:06.732 [2024-07-12 09:27:52.975006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.043 ms 00:22:06.732 [2024-07-12 09:27:52.975025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.732 [2024-07-12 09:27:52.982588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.732 [2024-07-12 09:27:52.982622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:06.732 [2024-07-12 09:27:52.982637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.522 ms 00:22:06.732 [2024-07-12 09:27:52.982649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.732 [2024-07-12 09:27:53.013774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.732 [2024-07-12 09:27:53.013823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:06.732 [2024-07-12 09:27:53.013841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
31.047 ms 00:22:06.732 [2024-07-12 09:27:53.013853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.732 [2024-07-12 09:27:53.031577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.732 [2024-07-12 09:27:53.031623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:06.732 [2024-07-12 09:27:53.031642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.651 ms 00:22:06.732 [2024-07-12 09:27:53.031654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.732 [2024-07-12 09:27:53.031838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.732 [2024-07-12 09:27:53.031860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:06.732 [2024-07-12 09:27:53.031874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:22:06.732 [2024-07-12 09:27:53.031886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.732 [2024-07-12 09:27:53.063081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.732 [2024-07-12 09:27:53.063129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:06.732 [2024-07-12 09:27:53.063146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.172 ms 00:22:06.732 [2024-07-12 09:27:53.063158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.991 [2024-07-12 09:27:53.094922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.991 [2024-07-12 09:27:53.094969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:06.991 [2024-07-12 09:27:53.094987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.678 ms 00:22:06.991 [2024-07-12 09:27:53.094999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.991 [2024-07-12 09:27:53.125678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.991 [2024-07-12 09:27:53.125727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:06.991 [2024-07-12 09:27:53.125745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.611 ms 00:22:06.991 [2024-07-12 09:27:53.125756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.991 [2024-07-12 09:27:53.157188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.991 [2024-07-12 09:27:53.157247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:06.991 [2024-07-12 09:27:53.157265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.325 ms 00:22:06.991 [2024-07-12 09:27:53.157277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.991 [2024-07-12 09:27:53.157347] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:06.991 [2024-07-12 09:27:53.157374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 
09:27:53.157434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:22:06.991 [2024-07-12 09:27:53.157727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:06.991 [2024-07-12 09:27:53.157949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.157960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.157972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.157983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.157995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:06.992 [2024-07-12 09:27:53.158592] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:06.992 [2024-07-12 09:27:53.158604] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a224f06c-e4f3-4bb1-bd64-4dc6315ffcd7 00:22:06.992 [2024-07-12 09:27:53.158615] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:06.992 [2024-07-12 09:27:53.158626] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:06.992 
[2024-07-12 09:27:53.158650] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:06.992 [2024-07-12 09:27:53.158661] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:06.992 [2024-07-12 09:27:53.158671] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:06.992 [2024-07-12 09:27:53.158682] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:06.992 [2024-07-12 09:27:53.158693] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:06.992 [2024-07-12 09:27:53.158703] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:06.992 [2024-07-12 09:27:53.158713] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:06.992 [2024-07-12 09:27:53.158724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.992 [2024-07-12 09:27:53.158735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:06.992 [2024-07-12 09:27:53.158747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.379 ms 00:22:06.992 [2024-07-12 09:27:53.158762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.992 [2024-07-12 09:27:53.176019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.992 [2024-07-12 09:27:53.176061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:06.992 [2024-07-12 09:27:53.176079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.229 ms 00:22:06.992 [2024-07-12 09:27:53.176092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.992 [2024-07-12 09:27:53.176581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.992 [2024-07-12 09:27:53.176613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:06.992 [2024-07-12 09:27:53.176635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:22:06.992 [2024-07-12 09:27:53.176646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.992 [2024-07-12 09:27:53.217687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.992 [2024-07-12 09:27:53.217741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:06.992 [2024-07-12 09:27:53.217758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.992 [2024-07-12 09:27:53.217769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.992 [2024-07-12 09:27:53.217865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.992 [2024-07-12 09:27:53.217882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:06.992 [2024-07-12 09:27:53.217902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.992 [2024-07-12 09:27:53.217913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.992 [2024-07-12 09:27:53.217973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.992 [2024-07-12 09:27:53.217991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:06.992 [2024-07-12 09:27:53.218003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.992 [2024-07-12 09:27:53.218014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.992 [2024-07-12 09:27:53.218038] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:22:06.992 [2024-07-12 09:27:53.218051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:06.992 [2024-07-12 09:27:53.218062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.993 [2024-07-12 09:27:53.218079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.993 [2024-07-12 09:27:53.319241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.993 [2024-07-12 09:27:53.319311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:06.993 [2024-07-12 09:27:53.319331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.993 [2024-07-12 09:27:53.319343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.251 [2024-07-12 09:27:53.403719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:07.251 [2024-07-12 09:27:53.403789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:07.251 [2024-07-12 09:27:53.403816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:07.251 [2024-07-12 09:27:53.403828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.251 [2024-07-12 09:27:53.403913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:07.251 [2024-07-12 09:27:53.403930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:07.251 [2024-07-12 09:27:53.403942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:07.251 [2024-07-12 09:27:53.403962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.251 [2024-07-12 09:27:53.403998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:07.251 [2024-07-12 09:27:53.404011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:07.251 [2024-07-12 09:27:53.404022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:07.251 [2024-07-12 09:27:53.404034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.251 [2024-07-12 09:27:53.404158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:07.251 [2024-07-12 09:27:53.404177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:07.251 [2024-07-12 09:27:53.404216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:07.251 [2024-07-12 09:27:53.404229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.251 [2024-07-12 09:27:53.404279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:07.251 [2024-07-12 09:27:53.404305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:07.251 [2024-07-12 09:27:53.404316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:07.251 [2024-07-12 09:27:53.404328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.251 [2024-07-12 09:27:53.404379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:07.251 [2024-07-12 09:27:53.404395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:07.251 [2024-07-12 09:27:53.404406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:07.251 [2024-07-12 09:27:53.404417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:07.251 [2024-07-12 09:27:53.404470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:07.251 [2024-07-12 09:27:53.404487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:07.251 [2024-07-12 09:27:53.404498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:07.251 [2024-07-12 09:27:53.404509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.251 [2024-07-12 09:27:53.404686] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 439.451 ms, result 0 00:22:08.183 00:22:08.183 00:22:08.183 09:27:54 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=81344 00:22:08.183 09:27:54 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:22:08.183 09:27:54 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 81344 00:22:08.183 09:27:54 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 81344 ']' 00:22:08.183 09:27:54 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.183 09:27:54 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:08.183 09:27:54 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.183 09:27:54 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:08.183 09:27:54 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:08.441 [2024-07-12 09:27:54.643121] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:08.441 [2024-07-12 09:27:54.643313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81344 ] 00:22:08.699 [2024-07-12 09:27:54.814394] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.699 [2024-07-12 09:27:55.043111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.632 09:27:55 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:09.632 09:27:55 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:22:09.632 09:27:55 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:22:09.891 [2024-07-12 09:27:56.052770] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:09.891 [2024-07-12 09:27:56.052855] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:09.891 [2024-07-12 09:27:56.229233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.891 [2024-07-12 09:27:56.229302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:09.891 [2024-07-12 09:27:56.229324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:09.891 [2024-07-12 09:27:56.229339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.891 [2024-07-12 09:27:56.232464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.891 [2024-07-12 09:27:56.232515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:09.891 [2024-07-12 09:27:56.232535] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.096 ms 00:22:09.891 [2024-07-12 09:27:56.232549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.891 [2024-07-12 09:27:56.232675] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:09.891 [2024-07-12 09:27:56.233619] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:09.891 [2024-07-12 09:27:56.233660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.891 [2024-07-12 09:27:56.233679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:09.891 [2024-07-12 09:27:56.233693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 00:22:09.891 [2024-07-12 09:27:56.233707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.891 [2024-07-12 09:27:56.234890] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:10.150 [2024-07-12 09:27:56.251742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.150 [2024-07-12 09:27:56.251801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:10.150 [2024-07-12 09:27:56.251826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.848 ms 00:22:10.150 [2024-07-12 09:27:56.251840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.150 [2024-07-12 09:27:56.251961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.150 [2024-07-12 09:27:56.251982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:10.150 [2024-07-12 09:27:56.251998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:10.150 [2024-07-12 09:27:56.252010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.150 [2024-07-12 09:27:56.256479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.150 [2024-07-12 09:27:56.256530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:10.150 [2024-07-12 09:27:56.256556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.401 ms 00:22:10.150 [2024-07-12 09:27:56.256569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.150 [2024-07-12 09:27:56.256714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.150 [2024-07-12 09:27:56.256735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:10.150 [2024-07-12 09:27:56.256751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:22:10.150 [2024-07-12 09:27:56.256764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.150 [2024-07-12 09:27:56.256811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.150 [2024-07-12 09:27:56.256826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:10.150 [2024-07-12 09:27:56.256840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:10.150 [2024-07-12 09:27:56.256851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.150 [2024-07-12 09:27:56.256889] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:10.150 [2024-07-12 09:27:56.261212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:10.150 [2024-07-12 09:27:56.261295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:10.150 [2024-07-12 09:27:56.261313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.336 ms 00:22:10.150 [2024-07-12 09:27:56.261327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.150 [2024-07-12 09:27:56.261395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.150 [2024-07-12 09:27:56.261420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:10.150 [2024-07-12 09:27:56.261433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:10.150 [2024-07-12 09:27:56.261450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.150 [2024-07-12 09:27:56.261478] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:10.150 [2024-07-12 09:27:56.261507] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:10.150 [2024-07-12 09:27:56.261556] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:10.150 [2024-07-12 09:27:56.261583] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:10.150 [2024-07-12 09:27:56.261688] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:10.150 [2024-07-12 09:27:56.261708] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:10.150 [2024-07-12 09:27:56.261727] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:10.150 [2024-07-12 09:27:56.261744] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:10.150 [2024-07-12 09:27:56.261759] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:10.150 [2024-07-12 09:27:56.261773] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:10.150 [2024-07-12 09:27:56.261784] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:10.150 [2024-07-12 09:27:56.261797] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:10.150 [2024-07-12 09:27:56.261808] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:10.150 [2024-07-12 09:27:56.261824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.150 [2024-07-12 09:27:56.261836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:10.150 [2024-07-12 09:27:56.261851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:22:10.151 [2024-07-12 09:27:56.261863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.151 [2024-07-12 09:27:56.261986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.151 [2024-07-12 09:27:56.262003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:10.151 [2024-07-12 09:27:56.262017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:10.151 [2024-07-12 09:27:56.262028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.151 [2024-07-12 09:27:56.262148] 
ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:10.151 [2024-07-12 09:27:56.262167] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:10.151 [2024-07-12 09:27:56.262203] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:10.151 [2024-07-12 09:27:56.262221] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.151 [2024-07-12 09:27:56.262237] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:10.151 [2024-07-12 09:27:56.262248] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:10.151 [2024-07-12 09:27:56.262264] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:10.151 [2024-07-12 09:27:56.262276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:10.151 [2024-07-12 09:27:56.262292] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:10.151 [2024-07-12 09:27:56.262302] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:10.151 [2024-07-12 09:27:56.262315] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:10.151 [2024-07-12 09:27:56.262326] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:10.151 [2024-07-12 09:27:56.262339] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:10.151 [2024-07-12 09:27:56.262350] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:10.151 [2024-07-12 09:27:56.262363] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:10.151 [2024-07-12 09:27:56.262374] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.151 [2024-07-12 09:27:56.262387] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:10.151 [2024-07-12 09:27:56.262397] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:10.151 [2024-07-12 09:27:56.262410] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.151 [2024-07-12 09:27:56.262422] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:10.151 [2024-07-12 09:27:56.262435] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:10.151 [2024-07-12 09:27:56.262447] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:10.151 [2024-07-12 09:27:56.262459] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:10.151 [2024-07-12 09:27:56.262470] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:10.151 [2024-07-12 09:27:56.262485] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:10.151 [2024-07-12 09:27:56.262495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:10.151 [2024-07-12 09:27:56.262508] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:10.151 [2024-07-12 09:27:56.262529] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:10.151 [2024-07-12 09:27:56.262542] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:10.151 [2024-07-12 09:27:56.262553] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:10.151 [2024-07-12 09:27:56.262567] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:10.151 [2024-07-12 09:27:56.262577] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:10.151 [2024-07-12 
09:27:56.262589] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:10.151 [2024-07-12 09:27:56.262600] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:10.151 [2024-07-12 09:27:56.262613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:10.151 [2024-07-12 09:27:56.262623] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:10.151 [2024-07-12 09:27:56.262636] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:10.151 [2024-07-12 09:27:56.262646] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:10.151 [2024-07-12 09:27:56.262658] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:10.151 [2024-07-12 09:27:56.262669] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.151 [2024-07-12 09:27:56.262684] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:10.151 [2024-07-12 09:27:56.262694] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:10.151 [2024-07-12 09:27:56.262707] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.151 [2024-07-12 09:27:56.262717] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:10.151 [2024-07-12 09:27:56.262734] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:10.151 [2024-07-12 09:27:56.262745] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:10.151 [2024-07-12 09:27:56.262758] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.151 [2024-07-12 09:27:56.262770] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:10.151 [2024-07-12 09:27:56.262783] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:10.151 [2024-07-12 09:27:56.262794] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:10.151 [2024-07-12 09:27:56.262808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:10.151 [2024-07-12 09:27:56.262818] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:10.151 [2024-07-12 09:27:56.262831] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:10.151 [2024-07-12 09:27:56.262844] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:10.151 [2024-07-12 09:27:56.262861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:10.151 [2024-07-12 09:27:56.262875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:10.151 [2024-07-12 09:27:56.262893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:10.151 [2024-07-12 09:27:56.262904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:10.151 [2024-07-12 09:27:56.262918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:10.151 [2024-07-12 09:27:56.262930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:10.151 
[2024-07-12 09:27:56.262943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:10.151 [2024-07-12 09:27:56.262955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:10.151 [2024-07-12 09:27:56.262968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:10.151 [2024-07-12 09:27:56.262980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:10.151 [2024-07-12 09:27:56.262993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:10.151 [2024-07-12 09:27:56.263005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:10.151 [2024-07-12 09:27:56.263018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:10.151 [2024-07-12 09:27:56.263030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:10.151 [2024-07-12 09:27:56.263043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:10.151 [2024-07-12 09:27:56.263056] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:10.151 [2024-07-12 09:27:56.263070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:10.151 [2024-07-12 09:27:56.263083] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:10.151 [2024-07-12 09:27:56.263100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:10.151 [2024-07-12 09:27:56.263113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:10.151 [2024-07-12 09:27:56.263126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:10.151 [2024-07-12 09:27:56.263140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.151 [2024-07-12 09:27:56.263154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:10.151 [2024-07-12 09:27:56.263166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.061 ms 00:22:10.151 [2024-07-12 09:27:56.263179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.151 [2024-07-12 09:27:56.297255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.151 [2024-07-12 09:27:56.297321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:10.151 [2024-07-12 09:27:56.297344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.977 ms 00:22:10.151 [2024-07-12 09:27:56.297364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.151 [2024-07-12 09:27:56.297546] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.151 [2024-07-12 09:27:56.297569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:10.151 [2024-07-12 09:27:56.297584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:10.151 [2024-07-12 09:27:56.297597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.151 [2024-07-12 09:27:56.337058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.151 [2024-07-12 09:27:56.337131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:10.151 [2024-07-12 09:27:56.337152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.429 ms 00:22:10.151 [2024-07-12 09:27:56.337167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.151 [2024-07-12 09:27:56.337304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.151 [2024-07-12 09:27:56.337330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:10.151 [2024-07-12 09:27:56.337345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:10.151 [2024-07-12 09:27:56.337359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.151 [2024-07-12 09:27:56.337677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.151 [2024-07-12 09:27:56.337706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:10.151 [2024-07-12 09:27:56.337727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:22:10.151 [2024-07-12 09:27:56.337741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.151 [2024-07-12 09:27:56.337891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.151 [2024-07-12 09:27:56.337913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:10.151 [2024-07-12 09:27:56.337926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:22:10.151 [2024-07-12 09:27:56.337940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.151 [2024-07-12 09:27:56.356310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.152 [2024-07-12 09:27:56.356365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:10.152 [2024-07-12 09:27:56.356386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.342 ms 00:22:10.152 [2024-07-12 09:27:56.356400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.152 [2024-07-12 09:27:56.373106] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:10.152 [2024-07-12 09:27:56.373156] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:10.152 [2024-07-12 09:27:56.373176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.152 [2024-07-12 09:27:56.373216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:10.152 [2024-07-12 09:27:56.373233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.623 ms 00:22:10.152 [2024-07-12 09:27:56.373247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.152 [2024-07-12 09:27:56.403121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.152 [2024-07-12 
09:27:56.403171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:10.152 [2024-07-12 09:27:56.403207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.777 ms 00:22:10.152 [2024-07-12 09:27:56.403225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.152 [2024-07-12 09:27:56.419072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.152 [2024-07-12 09:27:56.419118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:10.152 [2024-07-12 09:27:56.419147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.771 ms 00:22:10.152 [2024-07-12 09:27:56.419165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.152 [2024-07-12 09:27:56.435306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.152 [2024-07-12 09:27:56.435351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:10.152 [2024-07-12 09:27:56.435369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.037 ms 00:22:10.152 [2024-07-12 09:27:56.435383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.152 [2024-07-12 09:27:56.436181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.152 [2024-07-12 09:27:56.436240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:10.152 [2024-07-12 09:27:56.436257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:22:10.152 [2024-07-12 09:27:56.436271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.463 [2024-07-12 09:27:56.519325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.463 [2024-07-12 09:27:56.519420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:10.463 [2024-07-12 09:27:56.519469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.014 ms 00:22:10.463 [2024-07-12 09:27:56.519486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.463 [2024-07-12 09:27:56.533502] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:10.463 [2024-07-12 09:27:56.549002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.463 [2024-07-12 09:27:56.549070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:10.463 [2024-07-12 09:27:56.549114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.360 ms 00:22:10.463 [2024-07-12 09:27:56.549130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.463 [2024-07-12 09:27:56.549314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.463 [2024-07-12 09:27:56.549337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:10.463 [2024-07-12 09:27:56.549353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:10.463 [2024-07-12 09:27:56.549365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.463 [2024-07-12 09:27:56.549434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.463 [2024-07-12 09:27:56.549450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:10.463 [2024-07-12 09:27:56.549465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:10.463 
[2024-07-12 09:27:56.549477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.463 [2024-07-12 09:27:56.549515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.463 [2024-07-12 09:27:56.549529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:10.463 [2024-07-12 09:27:56.549547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:10.463 [2024-07-12 09:27:56.549558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.463 [2024-07-12 09:27:56.549599] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:10.463 [2024-07-12 09:27:56.549616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.463 [2024-07-12 09:27:56.549632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:10.463 [2024-07-12 09:27:56.549644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:22:10.463 [2024-07-12 09:27:56.549657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.463 [2024-07-12 09:27:56.583008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.463 [2024-07-12 09:27:56.583055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:10.463 [2024-07-12 09:27:56.583091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.320 ms 00:22:10.463 [2024-07-12 09:27:56.583106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.463 [2024-07-12 09:27:56.583260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.463 [2024-07-12 09:27:56.583287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:10.463 [2024-07-12 09:27:56.583302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:10.463 [2024-07-12 09:27:56.583316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.463 [2024-07-12 09:27:56.584267] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:10.463 [2024-07-12 09:27:56.588762] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 354.714 ms, result 0 00:22:10.463 [2024-07-12 09:27:56.589695] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:10.463 Some configs were skipped because the RPC state that can call them passed over. 
00:22:10.463 09:27:56 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:22:10.736 [2024-07-12 09:27:56.847863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.736 [2024-07-12 09:27:56.848087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:10.736 [2024-07-12 09:27:56.848243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.292 ms 00:22:10.736 [2024-07-12 09:27:56.848367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.736 [2024-07-12 09:27:56.848553] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.986 ms, result 0 00:22:10.736 true 00:22:10.736 09:27:56 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:22:10.994 [2024-07-12 09:27:57.131966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.994 [2024-07-12 09:27:57.132038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:10.994 [2024-07-12 09:27:57.132059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.016 ms 00:22:10.994 [2024-07-12 09:27:57.132073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.994 [2024-07-12 09:27:57.132123] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.177 ms, result 0 00:22:10.994 true 00:22:10.994 09:27:57 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 81344 00:22:10.994 09:27:57 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 81344 ']' 00:22:10.994 09:27:57 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 81344 00:22:10.994 09:27:57 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:22:10.994 09:27:57 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:10.994 09:27:57 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81344 00:22:10.994 09:27:57 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:10.994 killing process with pid 81344 00:22:10.994 09:27:57 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:10.994 09:27:57 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81344' 00:22:10.994 09:27:57 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 81344 00:22:10.994 09:27:57 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 81344 00:22:11.929 [2024-07-12 09:27:58.123738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.929 [2024-07-12 09:27:58.123808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:11.929 [2024-07-12 09:27:58.123832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:11.929 [2024-07-12 09:27:58.123845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.929 [2024-07-12 09:27:58.123879] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:11.929 [2024-07-12 09:27:58.127179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.929 [2024-07-12 09:27:58.127227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:11.929 [2024-07-12 09:27:58.127244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 3.278 ms 00:22:11.930 [2024-07-12 09:27:58.127260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.930 [2024-07-12 09:27:58.127558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.930 [2024-07-12 09:27:58.127586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:11.930 [2024-07-12 09:27:58.127600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:22:11.930 [2024-07-12 09:27:58.127613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.930 [2024-07-12 09:27:58.131827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.930 [2024-07-12 09:27:58.131877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:11.930 [2024-07-12 09:27:58.131897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.175 ms 00:22:11.930 [2024-07-12 09:27:58.131911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.930 [2024-07-12 09:27:58.139473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.930 [2024-07-12 09:27:58.139515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:11.930 [2024-07-12 09:27:58.139531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.517 ms 00:22:11.930 [2024-07-12 09:27:58.139546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.930 [2024-07-12 09:27:58.152385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.930 [2024-07-12 09:27:58.152444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:11.930 [2024-07-12 09:27:58.152463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.781 ms 00:22:11.930 [2024-07-12 09:27:58.152479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.930 [2024-07-12 09:27:58.160896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.930 [2024-07-12 09:27:58.160944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:11.930 [2024-07-12 09:27:58.160964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.372 ms 00:22:11.930 [2024-07-12 09:27:58.160978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.930 [2024-07-12 09:27:58.161117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.930 [2024-07-12 09:27:58.161139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:11.930 [2024-07-12 09:27:58.161152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:22:11.930 [2024-07-12 09:27:58.161179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.930 [2024-07-12 09:27:58.174613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.930 [2024-07-12 09:27:58.174658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:11.930 [2024-07-12 09:27:58.174675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.384 ms 00:22:11.930 [2024-07-12 09:27:58.174690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.930 [2024-07-12 09:27:58.187669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.930 [2024-07-12 09:27:58.187713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:11.930 [2024-07-12 
09:27:58.187730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.934 ms 00:22:11.930 [2024-07-12 09:27:58.187749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.930 [2024-07-12 09:27:58.200757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.930 [2024-07-12 09:27:58.200800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:11.930 [2024-07-12 09:27:58.200816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.964 ms 00:22:11.930 [2024-07-12 09:27:58.200830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.930 [2024-07-12 09:27:58.213514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.930 [2024-07-12 09:27:58.213557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:11.930 [2024-07-12 09:27:58.213573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.611 ms 00:22:11.930 [2024-07-12 09:27:58.213587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.930 [2024-07-12 09:27:58.213630] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:11.930 [2024-07-12 09:27:58.213658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.213674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.213688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.213700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.213714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.213727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.213743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.213755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.213769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.213781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.213795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.213808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.213821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.213833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.213847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.213859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.213875] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.213888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.213901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.213913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.213927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.213939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.213955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.213967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.213981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.213993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 
09:27:58.214230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:11.930 [2024-07-12 09:27:58.214402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:22:11.931 [2024-07-12 09:27:58.214557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.214988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.215002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:11.931 [2024-07-12 09:27:58.215026] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:11.931 [2024-07-12 09:27:58.215038] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a224f06c-e4f3-4bb1-bd64-4dc6315ffcd7 00:22:11.931 [2024-07-12 09:27:58.215057] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:11.931 [2024-07-12 09:27:58.215069] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:11.931 [2024-07-12 09:27:58.215081] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:11.931 [2024-07-12 09:27:58.215093] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:11.931 [2024-07-12 09:27:58.215106] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:11.931 [2024-07-12 09:27:58.215118] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:11.931 [2024-07-12 09:27:58.215131] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:11.931 [2024-07-12 09:27:58.215142] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:11.931 [2024-07-12 09:27:58.215167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:11.931 [2024-07-12 09:27:58.215179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.931 [2024-07-12 09:27:58.215206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:11.931 [2024-07-12 09:27:58.215221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.550 ms 00:22:11.931 [2024-07-12 09:27:58.215235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.931 [2024-07-12 09:27:58.232272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.931 [2024-07-12 09:27:58.232320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:11.931 [2024-07-12 09:27:58.232338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.996 ms 00:22:11.931 [2024-07-12 09:27:58.232355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.931 [2024-07-12 09:27:58.232819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:11.931 [2024-07-12 09:27:58.232850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:11.931 [2024-07-12 09:27:58.232869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:22:11.931 [2024-07-12 09:27:58.232886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.190 [2024-07-12 09:27:58.289387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.190 [2024-07-12 09:27:58.289455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:12.190 [2024-07-12 09:27:58.289475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.190 [2024-07-12 09:27:58.289489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.190 [2024-07-12 09:27:58.289622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.190 [2024-07-12 09:27:58.289644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:12.190 [2024-07-12 09:27:58.289658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.190 [2024-07-12 09:27:58.289675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.190 [2024-07-12 09:27:58.289739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.190 [2024-07-12 09:27:58.289762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:12.190 [2024-07-12 09:27:58.289775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.190 [2024-07-12 09:27:58.289791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.190 [2024-07-12 09:27:58.289816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.190 [2024-07-12 09:27:58.289832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:12.190 [2024-07-12 09:27:58.289844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.190 [2024-07-12 09:27:58.289858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.190 [2024-07-12 09:27:58.392261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.190 [2024-07-12 09:27:58.392334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:12.190 [2024-07-12 09:27:58.392355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.190 [2024-07-12 09:27:58.392369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.190 [2024-07-12 09:27:58.476600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.190 [2024-07-12 09:27:58.476678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:12.190 [2024-07-12 09:27:58.476699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.190 [2024-07-12 09:27:58.476714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.190 [2024-07-12 09:27:58.476824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.190 [2024-07-12 09:27:58.476846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:12.190 [2024-07-12 09:27:58.476859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.190 [2024-07-12 09:27:58.476875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:12.190 [2024-07-12 09:27:58.476910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.190 [2024-07-12 09:27:58.476927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:12.190 [2024-07-12 09:27:58.476946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.190 [2024-07-12 09:27:58.476960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.190 [2024-07-12 09:27:58.477089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.190 [2024-07-12 09:27:58.477111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:12.190 [2024-07-12 09:27:58.477124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.190 [2024-07-12 09:27:58.477138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.190 [2024-07-12 09:27:58.477212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.190 [2024-07-12 09:27:58.477236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:12.190 [2024-07-12 09:27:58.477249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.190 [2024-07-12 09:27:58.477263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.190 [2024-07-12 09:27:58.477312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.190 [2024-07-12 09:27:58.477334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:12.190 [2024-07-12 09:27:58.477347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.190 [2024-07-12 09:27:58.477362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.190 [2024-07-12 09:27:58.477416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.190 [2024-07-12 09:27:58.477436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:12.190 [2024-07-12 09:27:58.477451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.190 [2024-07-12 09:27:58.477470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.190 [2024-07-12 09:27:58.477631] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 353.878 ms, result 0 00:22:13.124 09:27:59 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:13.382 [2024-07-12 09:27:59.526207] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
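For context on the numbers just printed: the shutdown statistics dump above shows 960 total writes against 0 user writes, which is why WAF (write amplification, i.e. the ratio of total media writes to user writes) is reported as "inf" -- with zero user writes the ratio is undefined. Once the 'FTL shutdown' management process finishes, trim.sh (line 105) launches spdk_dd to read 65536 blocks from the ftl0 bdev into test/ftl/data using the ftl.json config. The logical block size is not printed here; assuming 4 KiB blocks (an inference from the 256/256 [MB] progress output further down, not something this log states), the transfer size works out to 256 MiB. A minimal sanity-check sketch under that assumption:

  # Sketch only: the 4096-byte block size is assumed, inferred from the
  # 256 MB total reported by the copy progress below.
  blocks=65536
  block_size=4096
  echo "$(( blocks * block_size / 1024 / 1024 )) MiB"   # prints: 256 MiB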
00:22:13.382 [2024-07-12 09:27:59.526376] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81409 ] 00:22:13.382 [2024-07-12 09:27:59.701385] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.641 [2024-07-12 09:27:59.930524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.210 [2024-07-12 09:28:00.255113] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:14.210 [2024-07-12 09:28:00.255210] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:14.210 [2024-07-12 09:28:00.417535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.210 [2024-07-12 09:28:00.417595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:14.210 [2024-07-12 09:28:00.417615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:14.210 [2024-07-12 09:28:00.417633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.210 [2024-07-12 09:28:00.420823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.210 [2024-07-12 09:28:00.420870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:14.210 [2024-07-12 09:28:00.420887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.161 ms 00:22:14.210 [2024-07-12 09:28:00.420899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.210 [2024-07-12 09:28:00.421021] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:14.210 [2024-07-12 09:28:00.421977] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:14.210 [2024-07-12 09:28:00.422021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.210 [2024-07-12 09:28:00.422035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:14.210 [2024-07-12 09:28:00.422048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.011 ms 00:22:14.210 [2024-07-12 09:28:00.422059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.210 [2024-07-12 09:28:00.423338] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:14.210 [2024-07-12 09:28:00.439678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.210 [2024-07-12 09:28:00.439724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:14.210 [2024-07-12 09:28:00.439749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.340 ms 00:22:14.210 [2024-07-12 09:28:00.439761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.210 [2024-07-12 09:28:00.439887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.210 [2024-07-12 09:28:00.439920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:14.210 [2024-07-12 09:28:00.439933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:14.210 [2024-07-12 09:28:00.439943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.210 [2024-07-12 09:28:00.444360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:14.210 [2024-07-12 09:28:00.444418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:14.210 [2024-07-12 09:28:00.444434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.356 ms 00:22:14.210 [2024-07-12 09:28:00.444446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.210 [2024-07-12 09:28:00.444593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.210 [2024-07-12 09:28:00.444614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:14.210 [2024-07-12 09:28:00.444627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:22:14.210 [2024-07-12 09:28:00.444638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.210 [2024-07-12 09:28:00.444682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.210 [2024-07-12 09:28:00.444698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:14.210 [2024-07-12 09:28:00.444710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:14.210 [2024-07-12 09:28:00.444725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.210 [2024-07-12 09:28:00.444758] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:14.210 [2024-07-12 09:28:00.449041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.210 [2024-07-12 09:28:00.449082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:14.210 [2024-07-12 09:28:00.449098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.293 ms 00:22:14.210 [2024-07-12 09:28:00.449110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.210 [2024-07-12 09:28:00.449179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.210 [2024-07-12 09:28:00.449212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:14.210 [2024-07-12 09:28:00.449225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:14.210 [2024-07-12 09:28:00.449236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.210 [2024-07-12 09:28:00.449269] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:14.210 [2024-07-12 09:28:00.449299] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:14.210 [2024-07-12 09:28:00.449345] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:14.210 [2024-07-12 09:28:00.449366] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:14.210 [2024-07-12 09:28:00.449472] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:14.210 [2024-07-12 09:28:00.449488] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:14.210 [2024-07-12 09:28:00.449502] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:14.210 [2024-07-12 09:28:00.449517] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:14.210 [2024-07-12 09:28:00.449530] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:14.210 [2024-07-12 09:28:00.449542] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:14.210 [2024-07-12 09:28:00.449558] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:14.211 [2024-07-12 09:28:00.449569] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:14.211 [2024-07-12 09:28:00.449579] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:14.211 [2024-07-12 09:28:00.449591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.211 [2024-07-12 09:28:00.449602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:14.211 [2024-07-12 09:28:00.449614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:22:14.211 [2024-07-12 09:28:00.449625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.211 [2024-07-12 09:28:00.449722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.211 [2024-07-12 09:28:00.449737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:14.211 [2024-07-12 09:28:00.449749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:14.211 [2024-07-12 09:28:00.449764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.211 [2024-07-12 09:28:00.449872] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:14.211 [2024-07-12 09:28:00.449888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:14.211 [2024-07-12 09:28:00.449900] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:14.211 [2024-07-12 09:28:00.449912] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:14.211 [2024-07-12 09:28:00.449923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:14.211 [2024-07-12 09:28:00.449934] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:14.211 [2024-07-12 09:28:00.449945] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:14.211 [2024-07-12 09:28:00.449955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:14.211 [2024-07-12 09:28:00.449965] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:14.211 [2024-07-12 09:28:00.449975] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:14.211 [2024-07-12 09:28:00.449986] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:14.211 [2024-07-12 09:28:00.450006] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:14.211 [2024-07-12 09:28:00.450016] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:14.211 [2024-07-12 09:28:00.450026] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:14.211 [2024-07-12 09:28:00.450037] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:14.211 [2024-07-12 09:28:00.450047] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:14.211 [2024-07-12 09:28:00.450059] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:14.211 [2024-07-12 09:28:00.450070] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:14.211 [2024-07-12 09:28:00.450093] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:14.211 [2024-07-12 09:28:00.450104] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:14.211 [2024-07-12 09:28:00.450114] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:14.211 [2024-07-12 09:28:00.450125] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:14.211 [2024-07-12 09:28:00.450135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:14.211 [2024-07-12 09:28:00.450145] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:14.211 [2024-07-12 09:28:00.450155] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:14.211 [2024-07-12 09:28:00.450165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:14.211 [2024-07-12 09:28:00.450175] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:14.211 [2024-07-12 09:28:00.450201] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:14.211 [2024-07-12 09:28:00.450214] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:14.211 [2024-07-12 09:28:00.450225] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:14.211 [2024-07-12 09:28:00.450427] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:14.211 [2024-07-12 09:28:00.450447] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:14.211 [2024-07-12 09:28:00.450458] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:14.211 [2024-07-12 09:28:00.450468] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:14.211 [2024-07-12 09:28:00.450555] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:14.211 [2024-07-12 09:28:00.450573] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:14.211 [2024-07-12 09:28:00.450584] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:14.211 [2024-07-12 09:28:00.450594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:14.211 [2024-07-12 09:28:00.450604] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:14.211 [2024-07-12 09:28:00.450614] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:14.211 [2024-07-12 09:28:00.450624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:14.211 [2024-07-12 09:28:00.450635] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:14.211 [2024-07-12 09:28:00.450644] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:14.211 [2024-07-12 09:28:00.450654] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:14.211 [2024-07-12 09:28:00.450665] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:14.211 [2024-07-12 09:28:00.450676] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:14.211 [2024-07-12 09:28:00.450687] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:14.211 [2024-07-12 09:28:00.450698] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:14.211 [2024-07-12 09:28:00.450710] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:14.211 [2024-07-12 09:28:00.450720] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:14.211 
[2024-07-12 09:28:00.450730] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:14.211 [2024-07-12 09:28:00.450740] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:14.211 [2024-07-12 09:28:00.450751] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:14.211 [2024-07-12 09:28:00.450763] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:14.211 [2024-07-12 09:28:00.450784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:14.211 [2024-07-12 09:28:00.450797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:14.211 [2024-07-12 09:28:00.450808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:14.211 [2024-07-12 09:28:00.450819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:14.211 [2024-07-12 09:28:00.450831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:14.211 [2024-07-12 09:28:00.450842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:14.211 [2024-07-12 09:28:00.450853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:14.211 [2024-07-12 09:28:00.450863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:14.211 [2024-07-12 09:28:00.450875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:14.211 [2024-07-12 09:28:00.450886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:14.211 [2024-07-12 09:28:00.450897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:14.211 [2024-07-12 09:28:00.450908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:14.211 [2024-07-12 09:28:00.450919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:14.211 [2024-07-12 09:28:00.450930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:14.211 [2024-07-12 09:28:00.450942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:14.211 [2024-07-12 09:28:00.450953] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:14.211 [2024-07-12 09:28:00.450965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:14.211 [2024-07-12 09:28:00.450977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:14.211 [2024-07-12 09:28:00.450988] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:14.211 [2024-07-12 09:28:00.450999] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:14.211 [2024-07-12 09:28:00.451010] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:14.211 [2024-07-12 09:28:00.451022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.211 [2024-07-12 09:28:00.451034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:14.211 [2024-07-12 09:28:00.451046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.217 ms 00:22:14.211 [2024-07-12 09:28:00.451056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.211 [2024-07-12 09:28:00.498958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.211 [2024-07-12 09:28:00.499016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:14.211 [2024-07-12 09:28:00.499037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.789 ms 00:22:14.211 [2024-07-12 09:28:00.499052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.211 [2024-07-12 09:28:00.499278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.211 [2024-07-12 09:28:00.499301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:14.211 [2024-07-12 09:28:00.499315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:14.211 [2024-07-12 09:28:00.499332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.211 [2024-07-12 09:28:00.538088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.211 [2024-07-12 09:28:00.538142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:14.211 [2024-07-12 09:28:00.538162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.720 ms 00:22:14.211 [2024-07-12 09:28:00.538174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.211 [2024-07-12 09:28:00.538310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.211 [2024-07-12 09:28:00.538331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:14.211 [2024-07-12 09:28:00.538344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:14.211 [2024-07-12 09:28:00.538355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.211 [2024-07-12 09:28:00.538687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.211 [2024-07-12 09:28:00.538706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:14.211 [2024-07-12 09:28:00.538718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:22:14.212 [2024-07-12 09:28:00.538729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.212 [2024-07-12 09:28:00.538883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.212 [2024-07-12 09:28:00.538905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:14.212 [2024-07-12 09:28:00.538917] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:22:14.212 [2024-07-12 09:28:00.538927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.212 [2024-07-12 09:28:00.555450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.212 [2024-07-12 09:28:00.555502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:14.212 [2024-07-12 09:28:00.555520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.493 ms 00:22:14.212 [2024-07-12 09:28:00.555532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.471 [2024-07-12 09:28:00.571913] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:14.471 [2024-07-12 09:28:00.571959] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:14.471 [2024-07-12 09:28:00.571978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.471 [2024-07-12 09:28:00.571991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:14.471 [2024-07-12 09:28:00.572004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.292 ms 00:22:14.471 [2024-07-12 09:28:00.572016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.471 [2024-07-12 09:28:00.602700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.471 [2024-07-12 09:28:00.602748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:14.471 [2024-07-12 09:28:00.602766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.589 ms 00:22:14.471 [2024-07-12 09:28:00.602778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.471 [2024-07-12 09:28:00.619059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.471 [2024-07-12 09:28:00.619108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:14.471 [2024-07-12 09:28:00.619127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.184 ms 00:22:14.471 [2024-07-12 09:28:00.619139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.471 [2024-07-12 09:28:00.635248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.471 [2024-07-12 09:28:00.635290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:14.471 [2024-07-12 09:28:00.635307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.007 ms 00:22:14.471 [2024-07-12 09:28:00.635318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.471 [2024-07-12 09:28:00.636126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.471 [2024-07-12 09:28:00.636182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:14.471 [2024-07-12 09:28:00.636233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:22:14.471 [2024-07-12 09:28:00.636245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.471 [2024-07-12 09:28:00.713244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.471 [2024-07-12 09:28:00.713361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:14.471 [2024-07-12 09:28:00.713398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 76.962 ms 00:22:14.471 [2024-07-12 09:28:00.713410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.471 [2024-07-12 09:28:00.727460] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:14.471 [2024-07-12 09:28:00.742757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.471 [2024-07-12 09:28:00.742827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:14.471 [2024-07-12 09:28:00.742848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.182 ms 00:22:14.471 [2024-07-12 09:28:00.742860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.471 [2024-07-12 09:28:00.742997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.471 [2024-07-12 09:28:00.743018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:14.471 [2024-07-12 09:28:00.743035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:14.471 [2024-07-12 09:28:00.743046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.471 [2024-07-12 09:28:00.743120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.471 [2024-07-12 09:28:00.743136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:14.471 [2024-07-12 09:28:00.743149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:22:14.471 [2024-07-12 09:28:00.743160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.471 [2024-07-12 09:28:00.743214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.471 [2024-07-12 09:28:00.743232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:14.471 [2024-07-12 09:28:00.743244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:14.471 [2024-07-12 09:28:00.743261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.471 [2024-07-12 09:28:00.743303] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:14.471 [2024-07-12 09:28:00.743320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.471 [2024-07-12 09:28:00.743332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:14.471 [2024-07-12 09:28:00.743343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:14.471 [2024-07-12 09:28:00.743354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.471 [2024-07-12 09:28:00.777423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.471 [2024-07-12 09:28:00.777498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:14.471 [2024-07-12 09:28:00.777555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.043 ms 00:22:14.471 [2024-07-12 09:28:00.777567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.471 [2024-07-12 09:28:00.777694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.471 [2024-07-12 09:28:00.777717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:14.471 [2024-07-12 09:28:00.777730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:14.471 [2024-07-12 09:28:00.777740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
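The layout figures printed earlier in this startup sequence are internally consistent: 23592960 L2P entries at an address size of 4 bytes is exactly the 90.00 MiB reported for the l2p region, and the 102400.00 MiB data_btm region together with the small metadata regions fits within the 103424.00 MiB base device capacity. A quick arithmetic check of the L2P figure, using the values copied verbatim from the ftl_layout.c dump above:

  # Values taken from the layout dump above; arithmetic only.
  l2p_entries=23592960
  l2p_addr_size=4
  echo "$(( l2p_entries * l2p_addr_size / 1024 / 1024 )) MiB"   # prints: 90 MiB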
00:22:14.471 [2024-07-12 09:28:00.778675] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:14.471 [2024-07-12 09:28:00.783092] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 360.812 ms, result 0 00:22:14.471 [2024-07-12 09:28:00.783943] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:14.471 [2024-07-12 09:28:00.801325] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:24.826  Copying: 27/256 [MB] (27 MBps) Copying: 52/256 [MB] (24 MBps) Copying: 79/256 [MB] (26 MBps) Copying: 105/256 [MB] (26 MBps) Copying: 132/256 [MB] (26 MBps) Copying: 158/256 [MB] (25 MBps) Copying: 183/256 [MB] (25 MBps) Copying: 207/256 [MB] (23 MBps) Copying: 231/256 [MB] (23 MBps) Copying: 256/256 [MB] (average 25 MBps)[2024-07-12 09:28:11.051916] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:24.826 [2024-07-12 09:28:11.065331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.827 [2024-07-12 09:28:11.065391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:24.827 [2024-07-12 09:28:11.065412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:24.827 [2024-07-12 09:28:11.065425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.827 [2024-07-12 09:28:11.065458] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:24.827 [2024-07-12 09:28:11.069377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.827 [2024-07-12 09:28:11.069426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:24.827 [2024-07-12 09:28:11.069442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.890 ms 00:22:24.827 [2024-07-12 09:28:11.069454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.827 [2024-07-12 09:28:11.069777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.827 [2024-07-12 09:28:11.069797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:24.827 [2024-07-12 09:28:11.069810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:22:24.827 [2024-07-12 09:28:11.069821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.827 [2024-07-12 09:28:11.074118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.827 [2024-07-12 09:28:11.074155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:24.827 [2024-07-12 09:28:11.074171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.274 ms 00:22:24.827 [2024-07-12 09:28:11.074205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.827 [2024-07-12 09:28:11.081883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.827 [2024-07-12 09:28:11.081922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:24.827 [2024-07-12 09:28:11.081937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.643 ms 00:22:24.827 [2024-07-12 09:28:11.081949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.827 [2024-07-12 09:28:11.113769] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.827 [2024-07-12 09:28:11.113820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:24.827 [2024-07-12 09:28:11.113846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.739 ms 00:22:24.827 [2024-07-12 09:28:11.113872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.827 [2024-07-12 09:28:11.131883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.827 [2024-07-12 09:28:11.131936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:24.827 [2024-07-12 09:28:11.131955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.938 ms 00:22:24.827 [2024-07-12 09:28:11.131967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.827 [2024-07-12 09:28:11.132150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.827 [2024-07-12 09:28:11.132171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:24.827 [2024-07-12 09:28:11.132206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:22:24.827 [2024-07-12 09:28:11.132222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.827 [2024-07-12 09:28:11.164209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.827 [2024-07-12 09:28:11.164266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:24.827 [2024-07-12 09:28:11.164285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.960 ms 00:22:24.827 [2024-07-12 09:28:11.164297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.086 [2024-07-12 09:28:11.196612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.086 [2024-07-12 09:28:11.196673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:25.086 [2024-07-12 09:28:11.196691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.232 ms 00:22:25.086 [2024-07-12 09:28:11.196702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.086 [2024-07-12 09:28:11.228820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.086 [2024-07-12 09:28:11.228879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:25.086 [2024-07-12 09:28:11.228897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.051 ms 00:22:25.086 [2024-07-12 09:28:11.228909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.086 [2024-07-12 09:28:11.261972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.086 [2024-07-12 09:28:11.262017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:25.086 [2024-07-12 09:28:11.262034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.949 ms 00:22:25.086 [2024-07-12 09:28:11.262045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.086 [2024-07-12 09:28:11.262108] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:25.086 [2024-07-12 09:28:11.262133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 
09:28:11.262184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:22:25.086 [2024-07-12 09:28:11.262485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:25.086 [2024-07-12 09:28:11.262714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.262725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.262737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.262748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.262760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.262771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.262782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.262793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.262804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.262815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.262826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.262838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.262859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.262870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.262881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.262892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.262904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.262915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.262927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.262943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.262961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.262973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.262985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.262997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:25.087 [2024-07-12 09:28:11.263361] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:25.087 [2024-07-12 09:28:11.263372] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
a224f06c-e4f3-4bb1-bd64-4dc6315ffcd7 00:22:25.087 [2024-07-12 09:28:11.263383] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:25.087 [2024-07-12 09:28:11.263394] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:25.087 [2024-07-12 09:28:11.263419] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:25.087 [2024-07-12 09:28:11.263440] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:25.087 [2024-07-12 09:28:11.263454] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:25.087 [2024-07-12 09:28:11.263466] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:25.087 [2024-07-12 09:28:11.263476] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:25.087 [2024-07-12 09:28:11.263486] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:25.087 [2024-07-12 09:28:11.263496] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:25.087 [2024-07-12 09:28:11.263507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.087 [2024-07-12 09:28:11.263518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:25.087 [2024-07-12 09:28:11.263529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.401 ms 00:22:25.087 [2024-07-12 09:28:11.263545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.087 [2024-07-12 09:28:11.281982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.087 [2024-07-12 09:28:11.282026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:25.087 [2024-07-12 09:28:11.282043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.409 ms 00:22:25.087 [2024-07-12 09:28:11.282054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.087 [2024-07-12 09:28:11.282524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.087 [2024-07-12 09:28:11.282549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:25.087 [2024-07-12 09:28:11.282571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:22:25.087 [2024-07-12 09:28:11.282582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.087 [2024-07-12 09:28:11.326739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.087 [2024-07-12 09:28:11.326814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:25.087 [2024-07-12 09:28:11.326844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.087 [2024-07-12 09:28:11.326857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.087 [2024-07-12 09:28:11.326954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.087 [2024-07-12 09:28:11.326971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:25.087 [2024-07-12 09:28:11.326991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.087 [2024-07-12 09:28:11.327002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.087 [2024-07-12 09:28:11.327065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.087 [2024-07-12 09:28:11.327082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:25.087 
[2024-07-12 09:28:11.327094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.087 [2024-07-12 09:28:11.327105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.087 [2024-07-12 09:28:11.327129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.087 [2024-07-12 09:28:11.327142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:25.087 [2024-07-12 09:28:11.327153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.087 [2024-07-12 09:28:11.327170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.087 [2024-07-12 09:28:11.428521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.087 [2024-07-12 09:28:11.428608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:25.087 [2024-07-12 09:28:11.428627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.087 [2024-07-12 09:28:11.428639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.346 [2024-07-12 09:28:11.513404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.346 [2024-07-12 09:28:11.513474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:25.346 [2024-07-12 09:28:11.513492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.346 [2024-07-12 09:28:11.513512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.346 [2024-07-12 09:28:11.513600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.346 [2024-07-12 09:28:11.513616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:25.346 [2024-07-12 09:28:11.513628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.346 [2024-07-12 09:28:11.513639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.346 [2024-07-12 09:28:11.513674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.346 [2024-07-12 09:28:11.513687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:25.346 [2024-07-12 09:28:11.513698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.346 [2024-07-12 09:28:11.513709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.346 [2024-07-12 09:28:11.513835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.346 [2024-07-12 09:28:11.513853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:25.346 [2024-07-12 09:28:11.513866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.346 [2024-07-12 09:28:11.513876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.346 [2024-07-12 09:28:11.513925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.346 [2024-07-12 09:28:11.513942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:25.346 [2024-07-12 09:28:11.513954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.346 [2024-07-12 09:28:11.513964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.346 [2024-07-12 09:28:11.514017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.346 [2024-07-12 09:28:11.514032] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:25.346 [2024-07-12 09:28:11.514044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.346 [2024-07-12 09:28:11.514054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.346 [2024-07-12 09:28:11.514107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.346 [2024-07-12 09:28:11.514123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:25.346 [2024-07-12 09:28:11.514134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.346 [2024-07-12 09:28:11.514144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.346 [2024-07-12 09:28:11.514338] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 449.015 ms, result 0 00:22:26.281 00:22:26.281 00:22:26.281 09:28:12 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:27.214 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:22:27.214 09:28:13 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:22:27.214 09:28:13 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:22:27.214 09:28:13 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:27.214 09:28:13 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:27.214 09:28:13 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:22:27.214 09:28:13 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:27.214 09:28:13 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 81344 00:22:27.214 09:28:13 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 81344 ']' 00:22:27.214 09:28:13 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 81344 00:22:27.214 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (81344) - No such process 00:22:27.214 Process with pid 81344 is not found 00:22:27.214 09:28:13 ftl.ftl_trim -- common/autotest_common.sh@975 -- # echo 'Process with pid 81344 is not found' 00:22:27.214 00:22:27.214 real 1m8.215s 00:22:27.214 user 1m32.215s 00:22:27.214 sys 0m6.430s 00:22:27.214 09:28:13 ftl.ftl_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:27.214 ************************************ 00:22:27.214 09:28:13 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:27.214 END TEST ftl_trim 00:22:27.214 ************************************ 00:22:27.214 09:28:13 ftl -- common/autotest_common.sh@1142 -- # return 0 00:22:27.214 09:28:13 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:27.214 09:28:13 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:27.214 09:28:13 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:27.214 09:28:13 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:27.214 ************************************ 00:22:27.214 START TEST ftl_restore 00:22:27.214 ************************************ 00:22:27.214 09:28:13 ftl.ftl_restore -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:27.214 * Looking for test storage... 
00:22:27.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.olSQKu4AFj 00:22:27.214 09:28:13 ftl.ftl_restore -- 
ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:27.214 09:28:13 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:22:27.215 09:28:13 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:22:27.215 09:28:13 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:22:27.215 09:28:13 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:27.215 09:28:13 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=81605 00:22:27.215 09:28:13 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 81605 00:22:27.215 09:28:13 ftl.ftl_restore -- common/autotest_common.sh@829 -- # '[' -z 81605 ']' 00:22:27.215 09:28:13 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:27.215 09:28:13 ftl.ftl_restore -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.215 09:28:13 ftl.ftl_restore -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:27.215 09:28:13 ftl.ftl_restore -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.215 09:28:13 ftl.ftl_restore -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:27.215 09:28:13 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:27.473 [2024-07-12 09:28:13.586734] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:27.473 [2024-07-12 09:28:13.586919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81605 ] 00:22:27.473 [2024-07-12 09:28:13.761291] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.730 [2024-07-12 09:28:13.990068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.666 09:28:14 ftl.ftl_restore -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:28.666 09:28:14 ftl.ftl_restore -- common/autotest_common.sh@862 -- # return 0 00:22:28.666 09:28:14 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:28.666 09:28:14 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:22:28.666 09:28:14 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:28.666 09:28:14 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:22:28.666 09:28:14 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:22:28.666 09:28:14 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:28.924 09:28:15 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:28.924 09:28:15 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:22:28.924 09:28:15 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:28.924 09:28:15 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:22:28.924 09:28:15 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:28.924 09:28:15 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:22:28.924 09:28:15 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:22:28.924 09:28:15 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:29.181 09:28:15 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:29.181 { 00:22:29.181 "name": "nvme0n1", 00:22:29.181 "aliases": [ 00:22:29.181 "0b1da969-a1ad-4d9a-a1b7-9927a3842605" 00:22:29.181 ], 00:22:29.181 "product_name": "NVMe disk", 00:22:29.181 "block_size": 4096, 00:22:29.181 "num_blocks": 1310720, 00:22:29.181 "uuid": "0b1da969-a1ad-4d9a-a1b7-9927a3842605", 00:22:29.181 "assigned_rate_limits": { 00:22:29.181 "rw_ios_per_sec": 0, 00:22:29.181 "rw_mbytes_per_sec": 0, 00:22:29.182 "r_mbytes_per_sec": 0, 00:22:29.182 "w_mbytes_per_sec": 0 00:22:29.182 }, 00:22:29.182 "claimed": true, 00:22:29.182 "claim_type": "read_many_write_one", 00:22:29.182 "zoned": false, 00:22:29.182 "supported_io_types": { 00:22:29.182 "read": true, 00:22:29.182 "write": true, 00:22:29.182 "unmap": true, 00:22:29.182 "flush": true, 00:22:29.182 "reset": true, 00:22:29.182 "nvme_admin": true, 00:22:29.182 "nvme_io": true, 00:22:29.182 "nvme_io_md": false, 00:22:29.182 "write_zeroes": true, 00:22:29.182 "zcopy": false, 00:22:29.182 "get_zone_info": false, 00:22:29.182 "zone_management": false, 00:22:29.182 "zone_append": false, 00:22:29.182 "compare": true, 00:22:29.182 "compare_and_write": false, 00:22:29.182 "abort": true, 00:22:29.182 "seek_hole": false, 00:22:29.182 "seek_data": false, 00:22:29.182 "copy": true, 00:22:29.182 "nvme_iov_md": false 00:22:29.182 }, 00:22:29.182 "driver_specific": { 00:22:29.182 "nvme": [ 00:22:29.182 { 00:22:29.182 "pci_address": "0000:00:11.0", 00:22:29.182 "trid": { 00:22:29.182 "trtype": "PCIe", 00:22:29.182 "traddr": "0000:00:11.0" 00:22:29.182 }, 00:22:29.182 "ctrlr_data": { 00:22:29.182 "cntlid": 0, 00:22:29.182 "vendor_id": "0x1b36", 00:22:29.182 "model_number": "QEMU NVMe Ctrl", 00:22:29.182 "serial_number": "12341", 00:22:29.182 "firmware_revision": "8.0.0", 00:22:29.182 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:29.182 "oacs": { 00:22:29.182 "security": 0, 00:22:29.182 "format": 1, 00:22:29.182 "firmware": 0, 00:22:29.182 "ns_manage": 1 00:22:29.182 }, 00:22:29.182 "multi_ctrlr": false, 00:22:29.182 "ana_reporting": false 00:22:29.182 }, 00:22:29.182 "vs": { 00:22:29.182 "nvme_version": "1.4" 00:22:29.182 }, 00:22:29.182 "ns_data": { 00:22:29.182 "id": 1, 00:22:29.182 "can_share": false 00:22:29.182 } 00:22:29.182 } 00:22:29.182 ], 00:22:29.182 "mp_policy": "active_passive" 00:22:29.182 } 00:22:29.182 } 00:22:29.182 ]' 00:22:29.182 09:28:15 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:29.182 09:28:15 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:22:29.182 09:28:15 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:29.182 09:28:15 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:29.182 09:28:15 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:29.182 09:28:15 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:22:29.182 09:28:15 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:22:29.182 09:28:15 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:29.182 09:28:15 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:22:29.182 09:28:15 ftl.ftl_restore -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:29.182 09:28:15 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:29.440 09:28:15 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=02e87b22-9595-46a2-8a33-751f51830a98 00:22:29.440 09:28:15 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:22:29.440 09:28:15 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 02e87b22-9595-46a2-8a33-751f51830a98 00:22:29.721 09:28:15 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:29.984 09:28:16 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=ab26e56a-338d-4e2c-8a66-7c960f22dbf6 00:22:29.984 09:28:16 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ab26e56a-338d-4e2c-8a66-7c960f22dbf6 00:22:30.243 09:28:16 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=880e4dc2-dd9a-4b2c-b830-2f520e431968 00:22:30.243 09:28:16 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:22:30.243 09:28:16 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 880e4dc2-dd9a-4b2c-b830-2f520e431968 00:22:30.243 09:28:16 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:22:30.243 09:28:16 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:30.243 09:28:16 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=880e4dc2-dd9a-4b2c-b830-2f520e431968 00:22:30.243 09:28:16 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:22:30.243 09:28:16 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 880e4dc2-dd9a-4b2c-b830-2f520e431968 00:22:30.243 09:28:16 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=880e4dc2-dd9a-4b2c-b830-2f520e431968 00:22:30.243 09:28:16 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:30.243 09:28:16 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:22:30.243 09:28:16 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:22:30.243 09:28:16 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 880e4dc2-dd9a-4b2c-b830-2f520e431968 00:22:30.502 09:28:16 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:30.502 { 00:22:30.502 "name": "880e4dc2-dd9a-4b2c-b830-2f520e431968", 00:22:30.502 "aliases": [ 00:22:30.502 "lvs/nvme0n1p0" 00:22:30.502 ], 00:22:30.502 "product_name": "Logical Volume", 00:22:30.502 "block_size": 4096, 00:22:30.502 "num_blocks": 26476544, 00:22:30.502 "uuid": "880e4dc2-dd9a-4b2c-b830-2f520e431968", 00:22:30.502 "assigned_rate_limits": { 00:22:30.502 "rw_ios_per_sec": 0, 00:22:30.502 "rw_mbytes_per_sec": 0, 00:22:30.502 "r_mbytes_per_sec": 0, 00:22:30.502 "w_mbytes_per_sec": 0 00:22:30.502 }, 00:22:30.502 "claimed": false, 00:22:30.502 "zoned": false, 00:22:30.502 "supported_io_types": { 00:22:30.502 "read": true, 00:22:30.502 "write": true, 00:22:30.502 "unmap": true, 00:22:30.502 "flush": false, 00:22:30.502 "reset": true, 00:22:30.502 "nvme_admin": false, 00:22:30.502 "nvme_io": false, 00:22:30.502 "nvme_io_md": false, 00:22:30.502 "write_zeroes": true, 00:22:30.502 "zcopy": false, 00:22:30.502 "get_zone_info": false, 00:22:30.502 "zone_management": false, 00:22:30.502 "zone_append": false, 00:22:30.502 "compare": false, 00:22:30.502 "compare_and_write": false, 00:22:30.502 "abort": 
false, 00:22:30.502 "seek_hole": true, 00:22:30.502 "seek_data": true, 00:22:30.502 "copy": false, 00:22:30.502 "nvme_iov_md": false 00:22:30.502 }, 00:22:30.502 "driver_specific": { 00:22:30.502 "lvol": { 00:22:30.502 "lvol_store_uuid": "ab26e56a-338d-4e2c-8a66-7c960f22dbf6", 00:22:30.502 "base_bdev": "nvme0n1", 00:22:30.502 "thin_provision": true, 00:22:30.502 "num_allocated_clusters": 0, 00:22:30.502 "snapshot": false, 00:22:30.502 "clone": false, 00:22:30.502 "esnap_clone": false 00:22:30.502 } 00:22:30.502 } 00:22:30.502 } 00:22:30.502 ]' 00:22:30.502 09:28:16 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:30.502 09:28:16 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:22:30.502 09:28:16 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:30.502 09:28:16 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:30.502 09:28:16 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:30.502 09:28:16 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:22:30.502 09:28:16 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:22:30.502 09:28:16 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:22:30.502 09:28:16 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:30.761 09:28:17 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:30.761 09:28:17 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:30.761 09:28:17 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 880e4dc2-dd9a-4b2c-b830-2f520e431968 00:22:30.761 09:28:17 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=880e4dc2-dd9a-4b2c-b830-2f520e431968 00:22:30.761 09:28:17 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:30.761 09:28:17 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:22:30.761 09:28:17 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:22:31.020 09:28:17 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 880e4dc2-dd9a-4b2c-b830-2f520e431968 00:22:31.279 09:28:17 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:31.279 { 00:22:31.279 "name": "880e4dc2-dd9a-4b2c-b830-2f520e431968", 00:22:31.279 "aliases": [ 00:22:31.279 "lvs/nvme0n1p0" 00:22:31.279 ], 00:22:31.279 "product_name": "Logical Volume", 00:22:31.279 "block_size": 4096, 00:22:31.279 "num_blocks": 26476544, 00:22:31.279 "uuid": "880e4dc2-dd9a-4b2c-b830-2f520e431968", 00:22:31.279 "assigned_rate_limits": { 00:22:31.279 "rw_ios_per_sec": 0, 00:22:31.279 "rw_mbytes_per_sec": 0, 00:22:31.279 "r_mbytes_per_sec": 0, 00:22:31.279 "w_mbytes_per_sec": 0 00:22:31.279 }, 00:22:31.279 "claimed": false, 00:22:31.279 "zoned": false, 00:22:31.279 "supported_io_types": { 00:22:31.279 "read": true, 00:22:31.279 "write": true, 00:22:31.279 "unmap": true, 00:22:31.279 "flush": false, 00:22:31.279 "reset": true, 00:22:31.279 "nvme_admin": false, 00:22:31.279 "nvme_io": false, 00:22:31.279 "nvme_io_md": false, 00:22:31.279 "write_zeroes": true, 00:22:31.279 "zcopy": false, 00:22:31.279 "get_zone_info": false, 00:22:31.279 "zone_management": false, 00:22:31.279 "zone_append": false, 00:22:31.279 "compare": false, 00:22:31.279 "compare_and_write": false, 00:22:31.279 "abort": false, 00:22:31.279 "seek_hole": true, 00:22:31.279 "seek_data": 
true, 00:22:31.279 "copy": false, 00:22:31.279 "nvme_iov_md": false 00:22:31.279 }, 00:22:31.279 "driver_specific": { 00:22:31.279 "lvol": { 00:22:31.279 "lvol_store_uuid": "ab26e56a-338d-4e2c-8a66-7c960f22dbf6", 00:22:31.279 "base_bdev": "nvme0n1", 00:22:31.279 "thin_provision": true, 00:22:31.279 "num_allocated_clusters": 0, 00:22:31.279 "snapshot": false, 00:22:31.279 "clone": false, 00:22:31.279 "esnap_clone": false 00:22:31.279 } 00:22:31.279 } 00:22:31.279 } 00:22:31.279 ]' 00:22:31.279 09:28:17 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:31.279 09:28:17 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:22:31.279 09:28:17 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:31.279 09:28:17 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:31.279 09:28:17 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:31.279 09:28:17 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:22:31.279 09:28:17 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:22:31.279 09:28:17 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:31.536 09:28:17 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:22:31.536 09:28:17 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 880e4dc2-dd9a-4b2c-b830-2f520e431968 00:22:31.536 09:28:17 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=880e4dc2-dd9a-4b2c-b830-2f520e431968 00:22:31.536 09:28:17 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:31.536 09:28:17 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:22:31.536 09:28:17 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:22:31.536 09:28:17 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 880e4dc2-dd9a-4b2c-b830-2f520e431968 00:22:31.794 09:28:18 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:31.794 { 00:22:31.794 "name": "880e4dc2-dd9a-4b2c-b830-2f520e431968", 00:22:31.794 "aliases": [ 00:22:31.794 "lvs/nvme0n1p0" 00:22:31.794 ], 00:22:31.794 "product_name": "Logical Volume", 00:22:31.794 "block_size": 4096, 00:22:31.794 "num_blocks": 26476544, 00:22:31.794 "uuid": "880e4dc2-dd9a-4b2c-b830-2f520e431968", 00:22:31.794 "assigned_rate_limits": { 00:22:31.794 "rw_ios_per_sec": 0, 00:22:31.794 "rw_mbytes_per_sec": 0, 00:22:31.794 "r_mbytes_per_sec": 0, 00:22:31.794 "w_mbytes_per_sec": 0 00:22:31.794 }, 00:22:31.794 "claimed": false, 00:22:31.794 "zoned": false, 00:22:31.794 "supported_io_types": { 00:22:31.794 "read": true, 00:22:31.794 "write": true, 00:22:31.794 "unmap": true, 00:22:31.794 "flush": false, 00:22:31.794 "reset": true, 00:22:31.794 "nvme_admin": false, 00:22:31.794 "nvme_io": false, 00:22:31.794 "nvme_io_md": false, 00:22:31.794 "write_zeroes": true, 00:22:31.794 "zcopy": false, 00:22:31.794 "get_zone_info": false, 00:22:31.794 "zone_management": false, 00:22:31.794 "zone_append": false, 00:22:31.794 "compare": false, 00:22:31.794 "compare_and_write": false, 00:22:31.794 "abort": false, 00:22:31.794 "seek_hole": true, 00:22:31.794 "seek_data": true, 00:22:31.794 "copy": false, 00:22:31.794 "nvme_iov_md": false 00:22:31.794 }, 00:22:31.794 "driver_specific": { 00:22:31.794 "lvol": { 00:22:31.794 "lvol_store_uuid": "ab26e56a-338d-4e2c-8a66-7c960f22dbf6", 00:22:31.794 "base_bdev": 
"nvme0n1", 00:22:31.794 "thin_provision": true, 00:22:31.794 "num_allocated_clusters": 0, 00:22:31.794 "snapshot": false, 00:22:31.794 "clone": false, 00:22:31.794 "esnap_clone": false 00:22:31.794 } 00:22:31.795 } 00:22:31.795 } 00:22:31.795 ]' 00:22:31.795 09:28:18 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:31.795 09:28:18 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:22:31.795 09:28:18 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:31.795 09:28:18 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:31.795 09:28:18 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:31.795 09:28:18 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:22:31.795 09:28:18 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:22:31.795 09:28:18 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 880e4dc2-dd9a-4b2c-b830-2f520e431968 --l2p_dram_limit 10' 00:22:31.795 09:28:18 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:22:31.795 09:28:18 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:22:31.795 09:28:18 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:31.795 09:28:18 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:22:31.795 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:22:31.795 09:28:18 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 880e4dc2-dd9a-4b2c-b830-2f520e431968 --l2p_dram_limit 10 -c nvc0n1p0 00:22:32.053 [2024-07-12 09:28:18.395680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.053 [2024-07-12 09:28:18.395751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:32.053 [2024-07-12 09:28:18.395774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:32.053 [2024-07-12 09:28:18.395788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.053 [2024-07-12 09:28:18.395869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.053 [2024-07-12 09:28:18.395906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:32.053 [2024-07-12 09:28:18.395919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:32.053 [2024-07-12 09:28:18.395933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.053 [2024-07-12 09:28:18.395963] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:32.053 [2024-07-12 09:28:18.396958] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:32.053 [2024-07-12 09:28:18.397001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.053 [2024-07-12 09:28:18.397023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:32.053 [2024-07-12 09:28:18.397036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.045 ms 00:22:32.053 [2024-07-12 09:28:18.397049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.053 [2024-07-12 09:28:18.397216] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID fe09ee24-544c-46a4-a924-452dd5e6cb29 00:22:32.053 [2024-07-12 
09:28:18.398235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.053 [2024-07-12 09:28:18.398291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:32.053 [2024-07-12 09:28:18.398315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:22:32.053 [2024-07-12 09:28:18.398327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.053 [2024-07-12 09:28:18.402860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.053 [2024-07-12 09:28:18.402911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:32.053 [2024-07-12 09:28:18.402935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.456 ms 00:22:32.053 [2024-07-12 09:28:18.402946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.053 [2024-07-12 09:28:18.403075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.053 [2024-07-12 09:28:18.403095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:32.053 [2024-07-12 09:28:18.403111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:22:32.053 [2024-07-12 09:28:18.403123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.053 [2024-07-12 09:28:18.403233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.053 [2024-07-12 09:28:18.403254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:32.053 [2024-07-12 09:28:18.403269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:32.053 [2024-07-12 09:28:18.403283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.053 [2024-07-12 09:28:18.403320] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:32.312 [2024-07-12 09:28:18.407935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.312 [2024-07-12 09:28:18.407979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:32.312 [2024-07-12 09:28:18.408011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.627 ms 00:22:32.312 [2024-07-12 09:28:18.408026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.312 [2024-07-12 09:28:18.408073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.312 [2024-07-12 09:28:18.408093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:32.312 [2024-07-12 09:28:18.408106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:32.312 [2024-07-12 09:28:18.408119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.312 [2024-07-12 09:28:18.408167] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:32.312 [2024-07-12 09:28:18.408359] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:32.312 [2024-07-12 09:28:18.408381] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:32.312 [2024-07-12 09:28:18.408402] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:32.312 [2024-07-12 09:28:18.408417] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 
103424.00 MiB 00:22:32.312 [2024-07-12 09:28:18.408433] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:32.312 [2024-07-12 09:28:18.408446] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:32.312 [2024-07-12 09:28:18.408459] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:32.312 [2024-07-12 09:28:18.408472] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:32.312 [2024-07-12 09:28:18.408493] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:32.312 [2024-07-12 09:28:18.408505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.312 [2024-07-12 09:28:18.408518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:32.312 [2024-07-12 09:28:18.408531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:22:32.312 [2024-07-12 09:28:18.408544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.312 [2024-07-12 09:28:18.408638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.312 [2024-07-12 09:28:18.408656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:32.312 [2024-07-12 09:28:18.408669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:32.312 [2024-07-12 09:28:18.408682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.312 [2024-07-12 09:28:18.408808] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:32.312 [2024-07-12 09:28:18.408833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:32.312 [2024-07-12 09:28:18.408858] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:32.312 [2024-07-12 09:28:18.408873] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.312 [2024-07-12 09:28:18.408885] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:32.312 [2024-07-12 09:28:18.408898] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:32.312 [2024-07-12 09:28:18.408909] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:32.312 [2024-07-12 09:28:18.408922] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:32.312 [2024-07-12 09:28:18.408933] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:32.312 [2024-07-12 09:28:18.408945] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:32.312 [2024-07-12 09:28:18.408955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:32.312 [2024-07-12 09:28:18.408968] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:32.312 [2024-07-12 09:28:18.408978] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:32.312 [2024-07-12 09:28:18.408993] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:32.312 [2024-07-12 09:28:18.409004] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:32.312 [2024-07-12 09:28:18.409016] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.312 [2024-07-12 09:28:18.409026] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:32.312 [2024-07-12 09:28:18.409041] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 
00:22:32.312 [2024-07-12 09:28:18.409052] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.312 [2024-07-12 09:28:18.409065] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:32.312 [2024-07-12 09:28:18.409076] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:32.312 [2024-07-12 09:28:18.409089] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:32.312 [2024-07-12 09:28:18.409100] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:32.312 [2024-07-12 09:28:18.409113] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:32.312 [2024-07-12 09:28:18.409123] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:32.312 [2024-07-12 09:28:18.409135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:32.312 [2024-07-12 09:28:18.409146] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:32.312 [2024-07-12 09:28:18.409158] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:32.312 [2024-07-12 09:28:18.409168] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:32.312 [2024-07-12 09:28:18.409181] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:32.312 [2024-07-12 09:28:18.409208] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:32.312 [2024-07-12 09:28:18.409222] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:32.312 [2024-07-12 09:28:18.409233] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:32.312 [2024-07-12 09:28:18.409247] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:32.312 [2024-07-12 09:28:18.409258] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:32.312 [2024-07-12 09:28:18.409271] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:32.312 [2024-07-12 09:28:18.409281] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:32.312 [2024-07-12 09:28:18.409293] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:32.312 [2024-07-12 09:28:18.409304] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:32.312 [2024-07-12 09:28:18.409318] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.312 [2024-07-12 09:28:18.409328] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:32.312 [2024-07-12 09:28:18.409341] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:32.312 [2024-07-12 09:28:18.409352] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.312 [2024-07-12 09:28:18.409363] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:32.312 [2024-07-12 09:28:18.409375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:32.312 [2024-07-12 09:28:18.409387] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:32.312 [2024-07-12 09:28:18.409399] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:32.312 [2024-07-12 09:28:18.409412] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:32.312 [2024-07-12 09:28:18.409423] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:32.312 [2024-07-12 09:28:18.409437] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:32.312 [2024-07-12 09:28:18.409448] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:32.312 [2024-07-12 09:28:18.409584] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:32.312 [2024-07-12 09:28:18.409605] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:32.312 [2024-07-12 09:28:18.409623] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:32.312 [2024-07-12 09:28:18.409638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:32.312 [2024-07-12 09:28:18.409655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:32.312 [2024-07-12 09:28:18.409667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:32.312 [2024-07-12 09:28:18.409681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:32.312 [2024-07-12 09:28:18.409693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:32.312 [2024-07-12 09:28:18.409706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:32.312 [2024-07-12 09:28:18.409718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:32.312 [2024-07-12 09:28:18.409731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:32.313 [2024-07-12 09:28:18.409743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:32.313 [2024-07-12 09:28:18.409758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:32.313 [2024-07-12 09:28:18.409770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:32.313 [2024-07-12 09:28:18.409785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:32.313 [2024-07-12 09:28:18.409797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:32.313 [2024-07-12 09:28:18.409811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:32.313 [2024-07-12 09:28:18.409823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:32.313 [2024-07-12 09:28:18.409836] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:32.313 [2024-07-12 09:28:18.409849] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:32.313 [2024-07-12 09:28:18.409864] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:32.313 [2024-07-12 09:28:18.409875] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:32.313 [2024-07-12 09:28:18.409889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:32.313 [2024-07-12 09:28:18.409901] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:32.313 [2024-07-12 09:28:18.409915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.313 [2024-07-12 09:28:18.409927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:32.313 [2024-07-12 09:28:18.409941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.171 ms 00:22:32.313 [2024-07-12 09:28:18.409953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.313 [2024-07-12 09:28:18.410012] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:32.313 [2024-07-12 09:28:18.410030] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:34.211 [2024-07-12 09:28:20.515130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.211 [2024-07-12 09:28:20.515210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:34.211 [2024-07-12 09:28:20.515237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2105.124 ms 00:22:34.211 [2024-07-12 09:28:20.515250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.211 [2024-07-12 09:28:20.547780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.211 [2024-07-12 09:28:20.547845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:34.211 [2024-07-12 09:28:20.547869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.197 ms 00:22:34.211 [2024-07-12 09:28:20.547882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.211 [2024-07-12 09:28:20.548086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.211 [2024-07-12 09:28:20.548108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:34.211 [2024-07-12 09:28:20.548124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:34.211 [2024-07-12 09:28:20.548138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.470 [2024-07-12 09:28:20.586955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.470 [2024-07-12 09:28:20.587014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:34.470 [2024-07-12 09:28:20.587037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.724 ms 00:22:34.470 [2024-07-12 09:28:20.587049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.470 [2024-07-12 09:28:20.587122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.470 [2024-07-12 09:28:20.587147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:34.470 [2024-07-12 09:28:20.587162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.004 ms 00:22:34.470 [2024-07-12 09:28:20.587174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.470 [2024-07-12 09:28:20.587595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.470 [2024-07-12 09:28:20.587633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:34.470 [2024-07-12 09:28:20.587650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:22:34.470 [2024-07-12 09:28:20.587662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.470 [2024-07-12 09:28:20.587812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.470 [2024-07-12 09:28:20.587830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:34.470 [2024-07-12 09:28:20.587849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:22:34.470 [2024-07-12 09:28:20.587860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.470 [2024-07-12 09:28:20.605286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.470 [2024-07-12 09:28:20.605337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:34.470 [2024-07-12 09:28:20.605358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.395 ms 00:22:34.470 [2024-07-12 09:28:20.605370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.470 [2024-07-12 09:28:20.619362] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:34.470 [2024-07-12 09:28:20.622215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.470 [2024-07-12 09:28:20.622257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:34.470 [2024-07-12 09:28:20.622275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.730 ms 00:22:34.470 [2024-07-12 09:28:20.622289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.470 [2024-07-12 09:28:20.692662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.470 [2024-07-12 09:28:20.692759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:34.470 [2024-07-12 09:28:20.692783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.332 ms 00:22:34.470 [2024-07-12 09:28:20.692799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.470 [2024-07-12 09:28:20.693065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.470 [2024-07-12 09:28:20.693093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:34.470 [2024-07-12 09:28:20.693108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:22:34.470 [2024-07-12 09:28:20.693124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.470 [2024-07-12 09:28:20.726086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.470 [2024-07-12 09:28:20.726138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:34.470 [2024-07-12 09:28:20.726158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.893 ms 00:22:34.470 [2024-07-12 09:28:20.726172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.470 [2024-07-12 09:28:20.758425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.470 [2024-07-12 
09:28:20.758475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:34.470 [2024-07-12 09:28:20.758495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.182 ms 00:22:34.470 [2024-07-12 09:28:20.758508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.470 [2024-07-12 09:28:20.759299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.470 [2024-07-12 09:28:20.759338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:34.470 [2024-07-12 09:28:20.759354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:22:34.470 [2024-07-12 09:28:20.759381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.728 [2024-07-12 09:28:20.851239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.728 [2024-07-12 09:28:20.851311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:34.728 [2024-07-12 09:28:20.851333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.778 ms 00:22:34.728 [2024-07-12 09:28:20.851351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.728 [2024-07-12 09:28:20.883955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.728 [2024-07-12 09:28:20.884014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:34.728 [2024-07-12 09:28:20.884050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.545 ms 00:22:34.728 [2024-07-12 09:28:20.884064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.728 [2024-07-12 09:28:20.916358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.728 [2024-07-12 09:28:20.916406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:34.728 [2024-07-12 09:28:20.916424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.240 ms 00:22:34.728 [2024-07-12 09:28:20.916438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.728 [2024-07-12 09:28:20.948928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.728 [2024-07-12 09:28:20.948980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:34.728 [2024-07-12 09:28:20.949000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.441 ms 00:22:34.728 [2024-07-12 09:28:20.949014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.728 [2024-07-12 09:28:20.949082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.728 [2024-07-12 09:28:20.949113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:34.728 [2024-07-12 09:28:20.949127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:34.728 [2024-07-12 09:28:20.949143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.728 [2024-07-12 09:28:20.949287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.728 [2024-07-12 09:28:20.949315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:34.728 [2024-07-12 09:28:20.949331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:34.728 [2024-07-12 09:28:20.949345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.728 [2024-07-12 09:28:20.950393] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2554.209 ms, result 0 00:22:34.728 { 00:22:34.728 "name": "ftl0", 00:22:34.728 "uuid": "fe09ee24-544c-46a4-a924-452dd5e6cb29" 00:22:34.728 } 00:22:34.728 09:28:20 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:22:34.728 09:28:20 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:34.986 09:28:21 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:22:34.986 09:28:21 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:35.245 [2024-07-12 09:28:21.538095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.245 [2024-07-12 09:28:21.538224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:35.245 [2024-07-12 09:28:21.538251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:35.245 [2024-07-12 09:28:21.538265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.245 [2024-07-12 09:28:21.538306] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:35.245 [2024-07-12 09:28:21.541621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.245 [2024-07-12 09:28:21.541663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:35.245 [2024-07-12 09:28:21.541680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.290 ms 00:22:35.245 [2024-07-12 09:28:21.541694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.245 [2024-07-12 09:28:21.542025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.245 [2024-07-12 09:28:21.542056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:35.245 [2024-07-12 09:28:21.542083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:22:35.245 [2024-07-12 09:28:21.542097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.245 [2024-07-12 09:28:21.545425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.245 [2024-07-12 09:28:21.545460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:35.245 [2024-07-12 09:28:21.545475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.303 ms 00:22:35.245 [2024-07-12 09:28:21.545489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.245 [2024-07-12 09:28:21.552259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.245 [2024-07-12 09:28:21.552296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:35.245 [2024-07-12 09:28:21.552314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.744 ms 00:22:35.245 [2024-07-12 09:28:21.552328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.245 [2024-07-12 09:28:21.583761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.245 [2024-07-12 09:28:21.583827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:35.245 [2024-07-12 09:28:21.583848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.344 ms 00:22:35.245 [2024-07-12 09:28:21.583862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.535 [2024-07-12 
09:28:21.603023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.535 [2024-07-12 09:28:21.603089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:35.535 [2024-07-12 09:28:21.603110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.101 ms 00:22:35.535 [2024-07-12 09:28:21.603125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.535 [2024-07-12 09:28:21.603371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.535 [2024-07-12 09:28:21.603402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:35.535 [2024-07-12 09:28:21.603417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:22:35.535 [2024-07-12 09:28:21.603430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.535 [2024-07-12 09:28:21.635136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.535 [2024-07-12 09:28:21.635210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:35.535 [2024-07-12 09:28:21.635232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.664 ms 00:22:35.536 [2024-07-12 09:28:21.635246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.536 [2024-07-12 09:28:21.666316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.536 [2024-07-12 09:28:21.666368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:35.536 [2024-07-12 09:28:21.666387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.012 ms 00:22:35.536 [2024-07-12 09:28:21.666400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.536 [2024-07-12 09:28:21.697156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.536 [2024-07-12 09:28:21.697226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:35.536 [2024-07-12 09:28:21.697246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.704 ms 00:22:35.536 [2024-07-12 09:28:21.697260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.536 [2024-07-12 09:28:21.728403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.536 [2024-07-12 09:28:21.728477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:35.536 [2024-07-12 09:28:21.728497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.000 ms 00:22:35.536 [2024-07-12 09:28:21.728512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.536 [2024-07-12 09:28:21.728565] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:35.536 [2024-07-12 09:28:21.728595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 
09:28:21.728665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.728994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 
00:22:35.536 [2024-07-12 09:28:21.729009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 
wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:35.536 [2024-07-12 09:28:21.729988] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:35.536 [2024-07-12 09:28:21.730002] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fe09ee24-544c-46a4-a924-452dd5e6cb29 00:22:35.536 [2024-07-12 09:28:21.730016] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:35.536 [2024-07-12 09:28:21.730027] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:35.536 [2024-07-12 09:28:21.730042] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:35.536 [2024-07-12 09:28:21.730053] 
ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:35.536 [2024-07-12 09:28:21.730066] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:35.536 [2024-07-12 09:28:21.730078] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:35.536 [2024-07-12 09:28:21.730092] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:35.536 [2024-07-12 09:28:21.730102] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:35.536 [2024-07-12 09:28:21.730114] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:35.536 [2024-07-12 09:28:21.730126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.536 [2024-07-12 09:28:21.730139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:35.536 [2024-07-12 09:28:21.730152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.563 ms 00:22:35.536 [2024-07-12 09:28:21.730165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.536 [2024-07-12 09:28:21.746805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.536 [2024-07-12 09:28:21.746856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:35.536 [2024-07-12 09:28:21.746874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.559 ms 00:22:35.536 [2024-07-12 09:28:21.746887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.536 [2024-07-12 09:28:21.747358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.536 [2024-07-12 09:28:21.747393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:35.536 [2024-07-12 09:28:21.747409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:22:35.536 [2024-07-12 09:28:21.747426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.536 [2024-07-12 09:28:21.799260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.536 [2024-07-12 09:28:21.799326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:35.536 [2024-07-12 09:28:21.799347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.536 [2024-07-12 09:28:21.799361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.536 [2024-07-12 09:28:21.799457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.536 [2024-07-12 09:28:21.799479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:35.536 [2024-07-12 09:28:21.799492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.536 [2024-07-12 09:28:21.799509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.536 [2024-07-12 09:28:21.799635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.536 [2024-07-12 09:28:21.799661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:35.536 [2024-07-12 09:28:21.799675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.536 [2024-07-12 09:28:21.799688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.536 [2024-07-12 09:28:21.799715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.536 [2024-07-12 09:28:21.799736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid 
map 00:22:35.536 [2024-07-12 09:28:21.799749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.536 [2024-07-12 09:28:21.799762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.795 [2024-07-12 09:28:21.898775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.795 [2024-07-12 09:28:21.898843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:35.795 [2024-07-12 09:28:21.898863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.795 [2024-07-12 09:28:21.898878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.795 [2024-07-12 09:28:21.983885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.795 [2024-07-12 09:28:21.983961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:35.795 [2024-07-12 09:28:21.983982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.795 [2024-07-12 09:28:21.984000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.795 [2024-07-12 09:28:21.984116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.795 [2024-07-12 09:28:21.984140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:35.795 [2024-07-12 09:28:21.984154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.795 [2024-07-12 09:28:21.984168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.795 [2024-07-12 09:28:21.984262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.795 [2024-07-12 09:28:21.984289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:35.795 [2024-07-12 09:28:21.984303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.795 [2024-07-12 09:28:21.984316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.795 [2024-07-12 09:28:21.984455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.795 [2024-07-12 09:28:21.984478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:35.795 [2024-07-12 09:28:21.984491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.795 [2024-07-12 09:28:21.984505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.795 [2024-07-12 09:28:21.984564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.795 [2024-07-12 09:28:21.984588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:35.795 [2024-07-12 09:28:21.984601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.795 [2024-07-12 09:28:21.984614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.795 [2024-07-12 09:28:21.984666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.795 [2024-07-12 09:28:21.984685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:35.795 [2024-07-12 09:28:21.984698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.795 [2024-07-12 09:28:21.984711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.795 [2024-07-12 09:28:21.984766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.795 [2024-07-12 09:28:21.984791] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:35.795 [2024-07-12 09:28:21.984804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.795 [2024-07-12 09:28:21.984817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.795 [2024-07-12 09:28:21.984977] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 446.848 ms, result 0 00:22:35.795 true 00:22:35.795 09:28:22 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 81605 00:22:35.795 09:28:22 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 81605 ']' 00:22:35.795 09:28:22 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 81605 00:22:35.795 09:28:22 ftl.ftl_restore -- common/autotest_common.sh@953 -- # uname 00:22:35.795 09:28:22 ftl.ftl_restore -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:35.795 09:28:22 ftl.ftl_restore -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81605 00:22:35.795 killing process with pid 81605 00:22:35.795 09:28:22 ftl.ftl_restore -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:35.795 09:28:22 ftl.ftl_restore -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:35.795 09:28:22 ftl.ftl_restore -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81605' 00:22:35.795 09:28:22 ftl.ftl_restore -- common/autotest_common.sh@967 -- # kill 81605 00:22:35.795 09:28:22 ftl.ftl_restore -- common/autotest_common.sh@972 -- # wait 81605 00:22:41.081 09:28:26 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:22:46.342 262144+0 records in 00:22:46.342 262144+0 records out 00:22:46.342 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.95036 s, 217 MB/s 00:22:46.342 09:28:31 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:48.241 09:28:34 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:48.241 [2024-07-12 09:28:34.205318] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
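The ftl_restore xtrace lines above show the test saving the bdev subsystem configuration, unloading ftl0 and killing the target, then generating random data and pushing it at the restored FTL device with spdk_dd (whose startup banner begins just above and continues below). A minimal sketch of those steps, reconstructed only from the commands visible in this log: the brace grouping and the redirection into ftl.json are assumptions inferred from the --json argument later passed to spdk_dd, and $ftl_json/$testfile are illustrative variable names, not taken from the script itself.

  # Persist the current bdev subsystem configuration so ftl0 can be re-attached later
  ftl_json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
  {
    echo '{"subsystems": ['                                                    # restore.sh@61
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev  # restore.sh@62
    echo ']}'                                                                  # restore.sh@63
  } > "$ftl_json"

  # Detach the FTL bdev (the 'FTL shutdown' management process logged above) and stop the app
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0          # restore.sh@65
  killprocess 81605                                                            # restore.sh@66, helper from autotest_common.sh

  # Write 1 GiB of random data (4 KiB x 256 Ki blocks), checksum it, then feed it to ftl0
  dd if=/dev/urandom of="$testfile" bs=4K count=256K                           # restore.sh@69
  md5sum "$testfile"                                                           # restore.sh@70
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if="$testfile" --ob=ftl0 \
      --json="$ftl_json"                                                       # restore.sh@73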
00:22:48.241 [2024-07-12 09:28:34.205483] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81842 ] 00:22:48.241 [2024-07-12 09:28:34.378526] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.499 [2024-07-12 09:28:34.618042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.758 [2024-07-12 09:28:34.929571] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:48.758 [2024-07-12 09:28:34.929658] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:48.758 [2024-07-12 09:28:35.088814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.758 [2024-07-12 09:28:35.088884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:48.758 [2024-07-12 09:28:35.088906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:48.758 [2024-07-12 09:28:35.088918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.758 [2024-07-12 09:28:35.088994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.758 [2024-07-12 09:28:35.089016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:48.758 [2024-07-12 09:28:35.089029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:22:48.758 [2024-07-12 09:28:35.089044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.758 [2024-07-12 09:28:35.089076] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:48.758 [2024-07-12 09:28:35.090008] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:48.758 [2024-07-12 09:28:35.090050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.758 [2024-07-12 09:28:35.090069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:48.758 [2024-07-12 09:28:35.090082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.981 ms 00:22:48.758 [2024-07-12 09:28:35.090092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.758 [2024-07-12 09:28:35.091257] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:48.758 [2024-07-12 09:28:35.107573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.758 [2024-07-12 09:28:35.107618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:48.758 [2024-07-12 09:28:35.107636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.318 ms 00:22:48.758 [2024-07-12 09:28:35.107648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.758 [2024-07-12 09:28:35.107729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.758 [2024-07-12 09:28:35.107750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:48.758 [2024-07-12 09:28:35.107767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:48.758 [2024-07-12 09:28:35.107778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.068 [2024-07-12 09:28:35.112308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:49.068 [2024-07-12 09:28:35.112358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:49.068 [2024-07-12 09:28:35.112375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.434 ms 00:22:49.068 [2024-07-12 09:28:35.112386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.068 [2024-07-12 09:28:35.112490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.068 [2024-07-12 09:28:35.112514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:49.068 [2024-07-12 09:28:35.112527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:22:49.069 [2024-07-12 09:28:35.112538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.069 [2024-07-12 09:28:35.112608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.069 [2024-07-12 09:28:35.112627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:49.069 [2024-07-12 09:28:35.112639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:49.069 [2024-07-12 09:28:35.112649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.069 [2024-07-12 09:28:35.112686] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:49.069 [2024-07-12 09:28:35.116977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.069 [2024-07-12 09:28:35.117016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:49.069 [2024-07-12 09:28:35.117031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.301 ms 00:22:49.069 [2024-07-12 09:28:35.117042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.069 [2024-07-12 09:28:35.117089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.069 [2024-07-12 09:28:35.117105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:49.069 [2024-07-12 09:28:35.117118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:49.069 [2024-07-12 09:28:35.117129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.069 [2024-07-12 09:28:35.117176] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:49.069 [2024-07-12 09:28:35.117226] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:49.069 [2024-07-12 09:28:35.117270] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:49.069 [2024-07-12 09:28:35.117293] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:49.069 [2024-07-12 09:28:35.117398] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:49.069 [2024-07-12 09:28:35.117414] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:49.069 [2024-07-12 09:28:35.117429] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:49.069 [2024-07-12 09:28:35.117444] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:49.069 [2024-07-12 09:28:35.117457] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:49.069 [2024-07-12 09:28:35.117469] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:49.069 [2024-07-12 09:28:35.117479] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:49.069 [2024-07-12 09:28:35.117489] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:49.069 [2024-07-12 09:28:35.117499] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:49.069 [2024-07-12 09:28:35.117510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.069 [2024-07-12 09:28:35.117526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:49.069 [2024-07-12 09:28:35.117537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:22:49.069 [2024-07-12 09:28:35.117547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.069 [2024-07-12 09:28:35.117636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.069 [2024-07-12 09:28:35.117652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:49.069 [2024-07-12 09:28:35.117664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:49.069 [2024-07-12 09:28:35.117674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.069 [2024-07-12 09:28:35.117810] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:49.069 [2024-07-12 09:28:35.117837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:49.069 [2024-07-12 09:28:35.117856] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:49.069 [2024-07-12 09:28:35.117868] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:49.069 [2024-07-12 09:28:35.117879] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:49.069 [2024-07-12 09:28:35.117889] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:49.069 [2024-07-12 09:28:35.117900] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:49.069 [2024-07-12 09:28:35.117910] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:49.069 [2024-07-12 09:28:35.117920] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:49.069 [2024-07-12 09:28:35.117930] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:49.069 [2024-07-12 09:28:35.117940] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:49.069 [2024-07-12 09:28:35.117950] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:49.069 [2024-07-12 09:28:35.117960] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:49.069 [2024-07-12 09:28:35.117970] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:49.069 [2024-07-12 09:28:35.117981] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:49.069 [2024-07-12 09:28:35.117991] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:49.069 [2024-07-12 09:28:35.118001] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:49.069 [2024-07-12 09:28:35.118011] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:49.069 [2024-07-12 09:28:35.118021] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:49.069 [2024-07-12 09:28:35.118031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:49.069 [2024-07-12 09:28:35.118053] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:49.069 [2024-07-12 09:28:35.118063] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:49.069 [2024-07-12 09:28:35.118073] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:49.069 [2024-07-12 09:28:35.118083] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:49.069 [2024-07-12 09:28:35.118093] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:49.069 [2024-07-12 09:28:35.118103] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:49.069 [2024-07-12 09:28:35.118113] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:49.069 [2024-07-12 09:28:35.118123] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:49.069 [2024-07-12 09:28:35.118132] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:49.069 [2024-07-12 09:28:35.118142] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:49.069 [2024-07-12 09:28:35.118152] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:49.069 [2024-07-12 09:28:35.118162] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:49.069 [2024-07-12 09:28:35.118172] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:49.069 [2024-07-12 09:28:35.118181] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:49.069 [2024-07-12 09:28:35.118209] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:49.069 [2024-07-12 09:28:35.118219] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:49.069 [2024-07-12 09:28:35.118229] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:49.069 [2024-07-12 09:28:35.118240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:49.069 [2024-07-12 09:28:35.118251] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:49.069 [2024-07-12 09:28:35.118261] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:49.069 [2024-07-12 09:28:35.118270] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:49.069 [2024-07-12 09:28:35.118280] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:49.069 [2024-07-12 09:28:35.118290] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:49.069 [2024-07-12 09:28:35.118300] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:49.069 [2024-07-12 09:28:35.118313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:49.069 [2024-07-12 09:28:35.118324] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:49.069 [2024-07-12 09:28:35.118335] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:49.069 [2024-07-12 09:28:35.118346] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:49.069 [2024-07-12 09:28:35.118356] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:49.069 [2024-07-12 09:28:35.118366] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:49.069 
[2024-07-12 09:28:35.118376] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:49.069 [2024-07-12 09:28:35.118386] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:49.069 [2024-07-12 09:28:35.118396] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:49.069 [2024-07-12 09:28:35.118408] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:49.069 [2024-07-12 09:28:35.118422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:49.069 [2024-07-12 09:28:35.118434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:49.069 [2024-07-12 09:28:35.118445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:49.069 [2024-07-12 09:28:35.118456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:49.069 [2024-07-12 09:28:35.118467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:49.069 [2024-07-12 09:28:35.118478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:49.069 [2024-07-12 09:28:35.118489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:49.070 [2024-07-12 09:28:35.118499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:49.070 [2024-07-12 09:28:35.118510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:49.070 [2024-07-12 09:28:35.118520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:49.070 [2024-07-12 09:28:35.118531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:49.070 [2024-07-12 09:28:35.118542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:49.070 [2024-07-12 09:28:35.118552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:49.070 [2024-07-12 09:28:35.118563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:49.070 [2024-07-12 09:28:35.118574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:49.070 [2024-07-12 09:28:35.118585] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:49.070 [2024-07-12 09:28:35.118597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:49.070 [2024-07-12 09:28:35.118608] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:49.070 [2024-07-12 09:28:35.118619] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:49.070 [2024-07-12 09:28:35.118630] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:49.070 [2024-07-12 09:28:35.118641] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:49.070 [2024-07-12 09:28:35.118653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.070 [2024-07-12 09:28:35.118671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:49.070 [2024-07-12 09:28:35.118682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.911 ms 00:22:49.070 [2024-07-12 09:28:35.118693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.070 [2024-07-12 09:28:35.166101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.070 [2024-07-12 09:28:35.166220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:49.070 [2024-07-12 09:28:35.166244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.342 ms 00:22:49.070 [2024-07-12 09:28:35.166256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.070 [2024-07-12 09:28:35.166408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.070 [2024-07-12 09:28:35.166440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:49.070 [2024-07-12 09:28:35.166470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:22:49.070 [2024-07-12 09:28:35.166481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.070 [2024-07-12 09:28:35.206406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.070 [2024-07-12 09:28:35.206463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:49.070 [2024-07-12 09:28:35.206514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.830 ms 00:22:49.070 [2024-07-12 09:28:35.206525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.070 [2024-07-12 09:28:35.206644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.070 [2024-07-12 09:28:35.206662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:49.070 [2024-07-12 09:28:35.206675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:49.070 [2024-07-12 09:28:35.206686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.070 [2024-07-12 09:28:35.207071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.070 [2024-07-12 09:28:35.207091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:49.070 [2024-07-12 09:28:35.207104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:22:49.070 [2024-07-12 09:28:35.207114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.070 [2024-07-12 09:28:35.207292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.070 [2024-07-12 09:28:35.207314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:49.070 [2024-07-12 09:28:35.207327] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:22:49.070 [2024-07-12 09:28:35.207337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.070 [2024-07-12 09:28:35.223705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.070 [2024-07-12 09:28:35.223765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:49.070 [2024-07-12 09:28:35.223782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.337 ms 00:22:49.070 [2024-07-12 09:28:35.223793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.070 [2024-07-12 09:28:35.240556] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:49.070 [2024-07-12 09:28:35.240608] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:49.070 [2024-07-12 09:28:35.240632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.070 [2024-07-12 09:28:35.240644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:49.070 [2024-07-12 09:28:35.240658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.685 ms 00:22:49.070 [2024-07-12 09:28:35.240669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.070 [2024-07-12 09:28:35.270910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.070 [2024-07-12 09:28:35.271098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:49.070 [2024-07-12 09:28:35.271238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.187 ms 00:22:49.070 [2024-07-12 09:28:35.271293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.070 [2024-07-12 09:28:35.288075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.070 [2024-07-12 09:28:35.288291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:49.070 [2024-07-12 09:28:35.288412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.556 ms 00:22:49.070 [2024-07-12 09:28:35.288435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.070 [2024-07-12 09:28:35.304567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.070 [2024-07-12 09:28:35.304624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:49.070 [2024-07-12 09:28:35.304641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.084 ms 00:22:49.070 [2024-07-12 09:28:35.304651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.070 [2024-07-12 09:28:35.305472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.070 [2024-07-12 09:28:35.305511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:49.070 [2024-07-12 09:28:35.305527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:22:49.070 [2024-07-12 09:28:35.305538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.070 [2024-07-12 09:28:35.379630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.070 [2024-07-12 09:28:35.379702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:49.070 [2024-07-12 09:28:35.379723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 74.066 ms 00:22:49.070 [2024-07-12 09:28:35.379734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.070 [2024-07-12 09:28:35.392944] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:49.070 [2024-07-12 09:28:35.395556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.070 [2024-07-12 09:28:35.395597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:49.070 [2024-07-12 09:28:35.395615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.739 ms 00:22:49.070 [2024-07-12 09:28:35.395626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.070 [2024-07-12 09:28:35.395738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.070 [2024-07-12 09:28:35.395758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:49.070 [2024-07-12 09:28:35.395771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:49.070 [2024-07-12 09:28:35.395782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.070 [2024-07-12 09:28:35.395870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.070 [2024-07-12 09:28:35.395890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:49.070 [2024-07-12 09:28:35.395908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:49.070 [2024-07-12 09:28:35.395919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.070 [2024-07-12 09:28:35.395952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.070 [2024-07-12 09:28:35.395969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:49.070 [2024-07-12 09:28:35.395980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:49.070 [2024-07-12 09:28:35.395991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.070 [2024-07-12 09:28:35.396031] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:49.070 [2024-07-12 09:28:35.396050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.070 [2024-07-12 09:28:35.396062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:49.070 [2024-07-12 09:28:35.396073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:22:49.070 [2024-07-12 09:28:35.396088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.328 [2024-07-12 09:28:35.428017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.328 [2024-07-12 09:28:35.428078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:49.328 [2024-07-12 09:28:35.428096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.903 ms 00:22:49.328 [2024-07-12 09:28:35.428107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.328 [2024-07-12 09:28:35.428257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.328 [2024-07-12 09:28:35.428279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:49.328 [2024-07-12 09:28:35.428301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:22:49.328 [2024-07-12 09:28:35.428312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:49.328 [2024-07-12 09:28:35.429435] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 340.097 ms, result 0 00:23:26.331  Copying: 28/1024 [MB] (28 MBps) Copying: 57/1024 [MB] (29 MBps) Copying: 85/1024 [MB] (27 MBps) Copying: 113/1024 [MB] (27 MBps) Copying: 141/1024 [MB] (27 MBps) Copying: 170/1024 [MB] (29 MBps) Copying: 199/1024 [MB] (28 MBps) Copying: 228/1024 [MB] (28 MBps) Copying: 256/1024 [MB] (28 MBps) Copying: 286/1024 [MB] (29 MBps) Copying: 315/1024 [MB] (28 MBps) Copying: 342/1024 [MB] (27 MBps) Copying: 371/1024 [MB] (28 MBps) Copying: 399/1024 [MB] (28 MBps) Copying: 427/1024 [MB] (28 MBps) Copying: 455/1024 [MB] (27 MBps) Copying: 483/1024 [MB] (28 MBps) Copying: 511/1024 [MB] (27 MBps) Copying: 539/1024 [MB] (28 MBps) Copying: 568/1024 [MB] (28 MBps) Copying: 595/1024 [MB] (27 MBps) Copying: 621/1024 [MB] (25 MBps) Copying: 647/1024 [MB] (25 MBps) Copying: 672/1024 [MB] (24 MBps) Copying: 698/1024 [MB] (26 MBps) Copying: 722/1024 [MB] (24 MBps) Copying: 749/1024 [MB] (26 MBps) Copying: 775/1024 [MB] (26 MBps) Copying: 803/1024 [MB] (27 MBps) Copying: 830/1024 [MB] (27 MBps) Copying: 860/1024 [MB] (29 MBps) Copying: 891/1024 [MB] (30 MBps) Copying: 916/1024 [MB] (25 MBps) Copying: 942/1024 [MB] (25 MBps) Copying: 969/1024 [MB] (26 MBps) Copying: 995/1024 [MB] (26 MBps) Copying: 1021/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-12 09:29:12.562498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.331 [2024-07-12 09:29:12.562580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:26.331 [2024-07-12 09:29:12.562603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:26.331 [2024-07-12 09:29:12.562615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.331 [2024-07-12 09:29:12.562645] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:26.331 [2024-07-12 09:29:12.566268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.331 [2024-07-12 09:29:12.566324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:26.331 [2024-07-12 09:29:12.566340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.600 ms 00:23:26.331 [2024-07-12 09:29:12.566351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.331 [2024-07-12 09:29:12.567737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.331 [2024-07-12 09:29:12.567780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:26.331 [2024-07-12 09:29:12.567819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.355 ms 00:23:26.331 [2024-07-12 09:29:12.567831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.331 [2024-07-12 09:29:12.582896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.331 [2024-07-12 09:29:12.582936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:26.331 [2024-07-12 09:29:12.582967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.043 ms 00:23:26.331 [2024-07-12 09:29:12.582993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.331 [2024-07-12 09:29:12.589176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.331 [2024-07-12 09:29:12.589233] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:26.331 [2024-07-12 09:29:12.589271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.147 ms 00:23:26.331 [2024-07-12 09:29:12.589281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.331 [2024-07-12 09:29:12.618019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.331 [2024-07-12 09:29:12.618059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:26.331 [2024-07-12 09:29:12.618090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.660 ms 00:23:26.331 [2024-07-12 09:29:12.618101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.331 [2024-07-12 09:29:12.634896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.331 [2024-07-12 09:29:12.634967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:26.331 [2024-07-12 09:29:12.634999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.753 ms 00:23:26.331 [2024-07-12 09:29:12.635009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.331 [2024-07-12 09:29:12.635152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.331 [2024-07-12 09:29:12.635173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:26.331 [2024-07-12 09:29:12.635202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:23:26.331 [2024-07-12 09:29:12.635232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.331 [2024-07-12 09:29:12.666952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.331 [2024-07-12 09:29:12.667043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:26.331 [2024-07-12 09:29:12.667074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.693 ms 00:23:26.331 [2024-07-12 09:29:12.667084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.592 [2024-07-12 09:29:12.697799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.592 [2024-07-12 09:29:12.697840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:26.592 [2024-07-12 09:29:12.697871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.668 ms 00:23:26.592 [2024-07-12 09:29:12.697880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.592 [2024-07-12 09:29:12.726282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.592 [2024-07-12 09:29:12.726490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:26.592 [2024-07-12 09:29:12.726644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.360 ms 00:23:26.592 [2024-07-12 09:29:12.726680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.592 [2024-07-12 09:29:12.754530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.592 [2024-07-12 09:29:12.754572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:26.592 [2024-07-12 09:29:12.754617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.753 ms 00:23:26.592 [2024-07-12 09:29:12.754627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.592 [2024-07-12 09:29:12.754668] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Bands validity: 00:23:26.592 [2024-07-12 09:29:12.754691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.754704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.754714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.754724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.754735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.754745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.754755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.754765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.754776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.754786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.754796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.754806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.754816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.754826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.754836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.754846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.754856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.754866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.754876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.754886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.754896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.754906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.754916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.754926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.754936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.754946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.755097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.755109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.755119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.755130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.755140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.755150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.755160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.755170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.755181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.755235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.755248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.755258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.755268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.755279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.755289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.755299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.755326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.755352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.755363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:26.592 [2024-07-12 09:29:12.755375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755427] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755734] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.755990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.756003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 
09:29:12.756014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:26.593 [2024-07-12 09:29:12.756035] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:26.593 [2024-07-12 09:29:12.756046] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fe09ee24-544c-46a4-a924-452dd5e6cb29 00:23:26.593 [2024-07-12 09:29:12.756057] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:26.593 [2024-07-12 09:29:12.756067] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:26.593 [2024-07-12 09:29:12.756077] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:26.593 [2024-07-12 09:29:12.756096] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:26.593 [2024-07-12 09:29:12.756106] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:26.593 [2024-07-12 09:29:12.756116] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:26.593 [2024-07-12 09:29:12.756127] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:26.593 [2024-07-12 09:29:12.756136] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:26.593 [2024-07-12 09:29:12.756146] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:26.593 [2024-07-12 09:29:12.756158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.593 [2024-07-12 09:29:12.756169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:26.593 [2024-07-12 09:29:12.756180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.491 ms 00:23:26.593 [2024-07-12 09:29:12.756191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.593 [2024-07-12 09:29:12.771803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.593 [2024-07-12 09:29:12.771862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:26.593 [2024-07-12 09:29:12.771893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.557 ms 00:23:26.593 [2024-07-12 09:29:12.771929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.593 [2024-07-12 09:29:12.772432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.593 [2024-07-12 09:29:12.772458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:26.593 [2024-07-12 09:29:12.772472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.478 ms 00:23:26.593 [2024-07-12 09:29:12.772482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.593 [2024-07-12 09:29:12.808085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:26.594 [2024-07-12 09:29:12.808151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:26.594 [2024-07-12 09:29:12.808181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:26.594 [2024-07-12 09:29:12.808191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.594 [2024-07-12 09:29:12.808284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:26.594 [2024-07-12 09:29:12.808300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:26.594 [2024-07-12 09:29:12.808326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:26.594 
[2024-07-12 09:29:12.808337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.594 [2024-07-12 09:29:12.808424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:26.594 [2024-07-12 09:29:12.808448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:26.594 [2024-07-12 09:29:12.808459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:26.594 [2024-07-12 09:29:12.808470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.594 [2024-07-12 09:29:12.808491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:26.594 [2024-07-12 09:29:12.808504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:26.594 [2024-07-12 09:29:12.808515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:26.594 [2024-07-12 09:29:12.808524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.594 [2024-07-12 09:29:12.905114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:26.594 [2024-07-12 09:29:12.905215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:26.594 [2024-07-12 09:29:12.905236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:26.594 [2024-07-12 09:29:12.905247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.857 [2024-07-12 09:29:12.989775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:26.857 [2024-07-12 09:29:12.989839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:26.857 [2024-07-12 09:29:12.989873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:26.857 [2024-07-12 09:29:12.989883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.857 [2024-07-12 09:29:12.989960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:26.857 [2024-07-12 09:29:12.989976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:26.857 [2024-07-12 09:29:12.989988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:26.857 [2024-07-12 09:29:12.990006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.857 [2024-07-12 09:29:12.990049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:26.857 [2024-07-12 09:29:12.990064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:26.857 [2024-07-12 09:29:12.990074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:26.857 [2024-07-12 09:29:12.990084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.857 [2024-07-12 09:29:12.990195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:26.857 [2024-07-12 09:29:12.990273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:26.857 [2024-07-12 09:29:12.990286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:26.857 [2024-07-12 09:29:12.990303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.857 [2024-07-12 09:29:12.990359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:26.857 [2024-07-12 09:29:12.990378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:26.857 [2024-07-12 09:29:12.990390] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:26.857 [2024-07-12 09:29:12.990401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.857 [2024-07-12 09:29:12.990444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:26.857 [2024-07-12 09:29:12.990460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:26.857 [2024-07-12 09:29:12.990471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:26.857 [2024-07-12 09:29:12.990482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.857 [2024-07-12 09:29:12.990539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:26.857 [2024-07-12 09:29:12.990555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:26.857 [2024-07-12 09:29:12.990567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:26.857 [2024-07-12 09:29:12.990577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.857 [2024-07-12 09:29:12.990718] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 428.184 ms, result 0 00:23:27.793 00:23:27.793 00:23:27.793 09:29:14 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:23:28.051 [2024-07-12 09:29:14.201500] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:28.051 [2024-07-12 09:29:14.201666] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82239 ] 00:23:28.051 [2024-07-12 09:29:14.360704] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.310 [2024-07-12 09:29:14.539773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.569 [2024-07-12 09:29:14.846079] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:28.569 [2024-07-12 09:29:14.846166] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:28.829 [2024-07-12 09:29:15.006714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.829 [2024-07-12 09:29:15.006781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:28.829 [2024-07-12 09:29:15.006818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:28.829 [2024-07-12 09:29:15.006840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.829 [2024-07-12 09:29:15.006928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.829 [2024-07-12 09:29:15.006949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:28.829 [2024-07-12 09:29:15.006962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:28.829 [2024-07-12 09:29:15.006976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.829 [2024-07-12 09:29:15.007007] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:28.829 [2024-07-12 09:29:15.007996] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:28.829 [2024-07-12 09:29:15.008042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.829 [2024-07-12 09:29:15.008062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:28.829 [2024-07-12 09:29:15.008075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.042 ms 00:23:28.829 [2024-07-12 09:29:15.008087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.829 [2024-07-12 09:29:15.009234] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:28.829 [2024-07-12 09:29:15.025084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.829 [2024-07-12 09:29:15.025146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:28.829 [2024-07-12 09:29:15.025180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.835 ms 00:23:28.829 [2024-07-12 09:29:15.025192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.829 [2024-07-12 09:29:15.025318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.829 [2024-07-12 09:29:15.025339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:28.829 [2024-07-12 09:29:15.025356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:28.829 [2024-07-12 09:29:15.025367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.829 [2024-07-12 09:29:15.029845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.829 [2024-07-12 09:29:15.029890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:28.829 [2024-07-12 09:29:15.029921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.373 ms 00:23:28.829 [2024-07-12 09:29:15.029931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.829 [2024-07-12 09:29:15.030025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.829 [2024-07-12 09:29:15.030047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:28.829 [2024-07-12 09:29:15.030059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:23:28.829 [2024-07-12 09:29:15.030069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.829 [2024-07-12 09:29:15.030129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.829 [2024-07-12 09:29:15.030147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:28.829 [2024-07-12 09:29:15.030159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:28.829 [2024-07-12 09:29:15.030170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.829 [2024-07-12 09:29:15.030222] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:28.829 [2024-07-12 09:29:15.034441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.829 [2024-07-12 09:29:15.034477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:28.829 [2024-07-12 09:29:15.034508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.248 ms 00:23:28.829 [2024-07-12 09:29:15.034519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.829 [2024-07-12 
09:29:15.034564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.829 [2024-07-12 09:29:15.034580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:28.829 [2024-07-12 09:29:15.034592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:28.829 [2024-07-12 09:29:15.034603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.829 [2024-07-12 09:29:15.034646] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:28.829 [2024-07-12 09:29:15.034677] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:28.829 [2024-07-12 09:29:15.034720] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:28.829 [2024-07-12 09:29:15.034743] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:28.829 [2024-07-12 09:29:15.034846] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:28.829 [2024-07-12 09:29:15.034861] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:28.829 [2024-07-12 09:29:15.034875] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:28.829 [2024-07-12 09:29:15.034889] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:28.829 [2024-07-12 09:29:15.034902] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:28.829 [2024-07-12 09:29:15.034914] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:28.829 [2024-07-12 09:29:15.034924] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:28.829 [2024-07-12 09:29:15.034935] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:28.829 [2024-07-12 09:29:15.034945] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:28.829 [2024-07-12 09:29:15.034957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.829 [2024-07-12 09:29:15.034973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:28.829 [2024-07-12 09:29:15.034984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:23:28.829 [2024-07-12 09:29:15.034995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.829 [2024-07-12 09:29:15.035081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.829 [2024-07-12 09:29:15.035095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:28.829 [2024-07-12 09:29:15.035106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:23:28.829 [2024-07-12 09:29:15.035117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.829 [2024-07-12 09:29:15.035255] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:28.829 [2024-07-12 09:29:15.035275] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:28.829 [2024-07-12 09:29:15.035294] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:28.829 [2024-07-12 09:29:15.035305] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:23:28.829 [2024-07-12 09:29:15.035317] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:28.829 [2024-07-12 09:29:15.035327] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:28.829 [2024-07-12 09:29:15.035338] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:28.829 [2024-07-12 09:29:15.035349] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:28.829 [2024-07-12 09:29:15.035359] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:28.829 [2024-07-12 09:29:15.035370] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:28.829 [2024-07-12 09:29:15.035381] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:28.829 [2024-07-12 09:29:15.035391] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:28.829 [2024-07-12 09:29:15.035401] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:28.829 [2024-07-12 09:29:15.035411] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:28.829 [2024-07-12 09:29:15.035423] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:28.829 [2024-07-12 09:29:15.035434] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.829 [2024-07-12 09:29:15.035454] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:28.829 [2024-07-12 09:29:15.035467] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:28.829 [2024-07-12 09:29:15.035477] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.829 [2024-07-12 09:29:15.035487] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:28.830 [2024-07-12 09:29:15.035511] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:28.830 [2024-07-12 09:29:15.035521] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:28.830 [2024-07-12 09:29:15.035532] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:28.830 [2024-07-12 09:29:15.035543] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:28.830 [2024-07-12 09:29:15.035553] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:28.830 [2024-07-12 09:29:15.035563] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:28.830 [2024-07-12 09:29:15.035573] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:28.830 [2024-07-12 09:29:15.035583] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:28.830 [2024-07-12 09:29:15.035594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:28.830 [2024-07-12 09:29:15.035604] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:28.830 [2024-07-12 09:29:15.035615] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:28.830 [2024-07-12 09:29:15.035625] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:28.830 [2024-07-12 09:29:15.035635] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:28.830 [2024-07-12 09:29:15.035646] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:28.830 [2024-07-12 09:29:15.035656] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:28.830 [2024-07-12 09:29:15.035666] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:28.830 [2024-07-12 09:29:15.035676] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:28.830 [2024-07-12 09:29:15.035687] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:28.830 [2024-07-12 09:29:15.035697] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:28.830 [2024-07-12 09:29:15.035707] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.830 [2024-07-12 09:29:15.035717] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:28.830 [2024-07-12 09:29:15.035727] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:28.830 [2024-07-12 09:29:15.035738] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.830 [2024-07-12 09:29:15.035748] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:28.830 [2024-07-12 09:29:15.035760] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:28.830 [2024-07-12 09:29:15.035771] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:28.830 [2024-07-12 09:29:15.035783] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.830 [2024-07-12 09:29:15.035794] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:28.830 [2024-07-12 09:29:15.035805] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:28.830 [2024-07-12 09:29:15.035816] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:28.830 [2024-07-12 09:29:15.035826] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:28.830 [2024-07-12 09:29:15.035836] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:28.830 [2024-07-12 09:29:15.035846] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:28.830 [2024-07-12 09:29:15.035858] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:28.830 [2024-07-12 09:29:15.035872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:28.830 [2024-07-12 09:29:15.035885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:28.830 [2024-07-12 09:29:15.035896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:28.830 [2024-07-12 09:29:15.035907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:28.830 [2024-07-12 09:29:15.035918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:28.830 [2024-07-12 09:29:15.035930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:28.830 [2024-07-12 09:29:15.035941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:28.830 [2024-07-12 09:29:15.035953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:28.830 [2024-07-12 
09:29:15.035964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:28.830 [2024-07-12 09:29:15.035975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:28.830 [2024-07-12 09:29:15.035987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:28.830 [2024-07-12 09:29:15.035998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:28.830 [2024-07-12 09:29:15.036009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:28.830 [2024-07-12 09:29:15.036021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:28.830 [2024-07-12 09:29:15.036032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:28.830 [2024-07-12 09:29:15.036044] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:28.830 [2024-07-12 09:29:15.036057] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:28.830 [2024-07-12 09:29:15.036069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:28.830 [2024-07-12 09:29:15.036080] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:28.830 [2024-07-12 09:29:15.036091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:28.830 [2024-07-12 09:29:15.036103] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:28.830 [2024-07-12 09:29:15.036115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.830 [2024-07-12 09:29:15.036131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:28.830 [2024-07-12 09:29:15.036143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.941 ms 00:23:28.830 [2024-07-12 09:29:15.036155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.830 [2024-07-12 09:29:15.075439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.830 [2024-07-12 09:29:15.075546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:28.830 [2024-07-12 09:29:15.075569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.180 ms 00:23:28.830 [2024-07-12 09:29:15.075587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.830 [2024-07-12 09:29:15.075711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.830 [2024-07-12 09:29:15.075729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:28.830 [2024-07-12 09:29:15.075742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:28.830 [2024-07-12 09:29:15.075753] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.830 [2024-07-12 09:29:15.112795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.830 [2024-07-12 09:29:15.112855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:28.830 [2024-07-12 09:29:15.112891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.932 ms 00:23:28.830 [2024-07-12 09:29:15.112902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.830 [2024-07-12 09:29:15.112986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.830 [2024-07-12 09:29:15.113003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:28.830 [2024-07-12 09:29:15.113015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:28.830 [2024-07-12 09:29:15.113026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.830 [2024-07-12 09:29:15.113441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.830 [2024-07-12 09:29:15.113461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:28.830 [2024-07-12 09:29:15.113474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:23:28.830 [2024-07-12 09:29:15.113485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.830 [2024-07-12 09:29:15.113657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.830 [2024-07-12 09:29:15.113678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:28.830 [2024-07-12 09:29:15.113690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:23:28.830 [2024-07-12 09:29:15.113701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.830 [2024-07-12 09:29:15.130060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.830 [2024-07-12 09:29:15.130110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:28.830 [2024-07-12 09:29:15.130144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.331 ms 00:23:28.830 [2024-07-12 09:29:15.130155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.830 [2024-07-12 09:29:15.146026] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:28.830 [2024-07-12 09:29:15.146072] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:28.830 [2024-07-12 09:29:15.146106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.830 [2024-07-12 09:29:15.146119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:28.830 [2024-07-12 09:29:15.146132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.753 ms 00:23:28.830 [2024-07-12 09:29:15.146142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.830 [2024-07-12 09:29:15.174786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.830 [2024-07-12 09:29:15.174842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:28.830 [2024-07-12 09:29:15.174870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.580 ms 00:23:28.830 [2024-07-12 09:29:15.174882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.089 [2024-07-12 
09:29:15.190699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.089 [2024-07-12 09:29:15.190743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:29.089 [2024-07-12 09:29:15.190776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.750 ms 00:23:29.089 [2024-07-12 09:29:15.190787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.089 [2024-07-12 09:29:15.206210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.089 [2024-07-12 09:29:15.206277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:29.089 [2024-07-12 09:29:15.206311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.377 ms 00:23:29.089 [2024-07-12 09:29:15.206323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.089 [2024-07-12 09:29:15.207160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.089 [2024-07-12 09:29:15.207211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:29.089 [2024-07-12 09:29:15.207229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:23:29.089 [2024-07-12 09:29:15.207241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.089 [2024-07-12 09:29:15.277231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.089 [2024-07-12 09:29:15.277332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:29.089 [2024-07-12 09:29:15.277368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.960 ms 00:23:29.089 [2024-07-12 09:29:15.277380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.089 [2024-07-12 09:29:15.289573] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:29.089 [2024-07-12 09:29:15.292269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.089 [2024-07-12 09:29:15.292304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:29.089 [2024-07-12 09:29:15.292354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.799 ms 00:23:29.089 [2024-07-12 09:29:15.292366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.089 [2024-07-12 09:29:15.292481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.089 [2024-07-12 09:29:15.292501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:29.089 [2024-07-12 09:29:15.292514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:29.089 [2024-07-12 09:29:15.292526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.089 [2024-07-12 09:29:15.292638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.089 [2024-07-12 09:29:15.292658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:29.089 [2024-07-12 09:29:15.292670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:23:29.089 [2024-07-12 09:29:15.292682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.089 [2024-07-12 09:29:15.292714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.089 [2024-07-12 09:29:15.292730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:29.089 [2024-07-12 09:29:15.292757] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:29.089 [2024-07-12 09:29:15.292768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.089 [2024-07-12 09:29:15.292807] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:29.089 [2024-07-12 09:29:15.292824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.089 [2024-07-12 09:29:15.292839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:29.089 [2024-07-12 09:29:15.292850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:29.089 [2024-07-12 09:29:15.292861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.089 [2024-07-12 09:29:15.322634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.089 [2024-07-12 09:29:15.322685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:29.089 [2024-07-12 09:29:15.322719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.737 ms 00:23:29.089 [2024-07-12 09:29:15.322730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.089 [2024-07-12 09:29:15.322820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.089 [2024-07-12 09:29:15.322839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:29.089 [2024-07-12 09:29:15.322852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:29.089 [2024-07-12 09:29:15.322862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.089 [2024-07-12 09:29:15.324352] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 317.061 ms, result 0 00:24:08.546  Copying: 26/1024 [MB] (26 MBps) Copying: 52/1024 [MB] (26 MBps) Copying: 78/1024 [MB] (25 MBps) Copying: 104/1024 [MB] (25 MBps) Copying: 130/1024 [MB] (26 MBps) Copying: 157/1024 [MB] (26 MBps) Copying: 183/1024 [MB] (25 MBps) Copying: 209/1024 [MB] (26 MBps) Copying: 236/1024 [MB] (26 MBps) Copying: 262/1024 [MB] (26 MBps) Copying: 289/1024 [MB] (26 MBps) Copying: 316/1024 [MB] (26 MBps) Copying: 342/1024 [MB] (26 MBps) Copying: 368/1024 [MB] (25 MBps) Copying: 395/1024 [MB] (26 MBps) Copying: 422/1024 [MB] (26 MBps) Copying: 448/1024 [MB] (26 MBps) Copying: 475/1024 [MB] (26 MBps) Copying: 501/1024 [MB] (26 MBps) Copying: 527/1024 [MB] (25 MBps) Copying: 553/1024 [MB] (25 MBps) Copying: 579/1024 [MB] (26 MBps) Copying: 606/1024 [MB] (26 MBps) Copying: 632/1024 [MB] (26 MBps) Copying: 658/1024 [MB] (25 MBps) Copying: 684/1024 [MB] (26 MBps) Copying: 711/1024 [MB] (26 MBps) Copying: 738/1024 [MB] (27 MBps) Copying: 765/1024 [MB] (26 MBps) Copying: 790/1024 [MB] (25 MBps) Copying: 815/1024 [MB] (24 MBps) Copying: 840/1024 [MB] (25 MBps) Copying: 865/1024 [MB] (25 MBps) Copying: 889/1024 [MB] (23 MBps) Copying: 915/1024 [MB] (25 MBps) Copying: 942/1024 [MB] (26 MBps) Copying: 968/1024 [MB] (26 MBps) Copying: 994/1024 [MB] (26 MBps) Copying: 1020/1024 [MB] (26 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-12 09:29:54.769364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.546 [2024-07-12 09:29:54.769722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:08.546 [2024-07-12 09:29:54.769764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:08.546 [2024-07-12 
09:29:54.769782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.546 [2024-07-12 09:29:54.769830] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:08.546 [2024-07-12 09:29:54.774979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.546 [2024-07-12 09:29:54.775028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:08.546 [2024-07-12 09:29:54.775049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.116 ms 00:24:08.546 [2024-07-12 09:29:54.775064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.546 [2024-07-12 09:29:54.775534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.546 [2024-07-12 09:29:54.775586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:08.546 [2024-07-12 09:29:54.775607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:24:08.546 [2024-07-12 09:29:54.775622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.546 [2024-07-12 09:29:54.780218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.546 [2024-07-12 09:29:54.780276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:08.546 [2024-07-12 09:29:54.780308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.569 ms 00:24:08.546 [2024-07-12 09:29:54.780320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.546 [2024-07-12 09:29:54.787935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.546 [2024-07-12 09:29:54.788005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:08.546 [2024-07-12 09:29:54.788035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.593 ms 00:24:08.546 [2024-07-12 09:29:54.788047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.546 [2024-07-12 09:29:54.822048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.546 [2024-07-12 09:29:54.822092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:08.546 [2024-07-12 09:29:54.822109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.932 ms 00:24:08.546 [2024-07-12 09:29:54.822137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.546 [2024-07-12 09:29:54.841466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.546 [2024-07-12 09:29:54.841509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:08.546 [2024-07-12 09:29:54.841543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.281 ms 00:24:08.546 [2024-07-12 09:29:54.841570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.546 [2024-07-12 09:29:54.841762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.546 [2024-07-12 09:29:54.841784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:08.546 [2024-07-12 09:29:54.841803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:24:08.546 [2024-07-12 09:29:54.841814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.546 [2024-07-12 09:29:54.875848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.546 [2024-07-12 09:29:54.875892] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:08.546 [2024-07-12 09:29:54.875909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.013 ms 00:24:08.546 [2024-07-12 09:29:54.875935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.805 [2024-07-12 09:29:54.909587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.805 [2024-07-12 09:29:54.909644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:08.805 [2024-07-12 09:29:54.909692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.606 ms 00:24:08.805 [2024-07-12 09:29:54.909703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.805 [2024-07-12 09:29:54.942608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.805 [2024-07-12 09:29:54.942649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:08.805 [2024-07-12 09:29:54.942681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.860 ms 00:24:08.805 [2024-07-12 09:29:54.942692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.805 [2024-07-12 09:29:54.975360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.805 [2024-07-12 09:29:54.975402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:08.805 [2024-07-12 09:29:54.975419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.577 ms 00:24:08.805 [2024-07-12 09:29:54.975431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.805 [2024-07-12 09:29:54.975481] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:08.805 [2024-07-12 09:29:54.975506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975653] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975956] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.975979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.976005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.976016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.976042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.976054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.976065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.976076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.976087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.976097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.976108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.976120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.976131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.976141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.976152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.976163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.976173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.976202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.976213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.976225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.976275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.976289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.976301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.976312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 
09:29:54.976324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.976336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.976347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:08.805 [2024-07-12 09:29:54.976359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:24:08.806 [2024-07-12 09:29:54.976621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:08.806 [2024-07-12 09:29:54.976841] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:08.806 [2024-07-12 09:29:54.976853] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fe09ee24-544c-46a4-a924-452dd5e6cb29 00:24:08.806 [2024-07-12 09:29:54.976865] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:08.806 [2024-07-12 09:29:54.976883] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:08.806 [2024-07-12 09:29:54.976894] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:08.806 [2024-07-12 09:29:54.976905] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:08.806 [2024-07-12 09:29:54.976916] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:08.806 [2024-07-12 09:29:54.976927] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:08.806 [2024-07-12 09:29:54.976938] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:08.806 [2024-07-12 09:29:54.976948] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:08.806 [2024-07-12 09:29:54.976958] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:08.806 [2024-07-12 09:29:54.976969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.806 [2024-07-12 09:29:54.976980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:08.806 [2024-07-12 09:29:54.976993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.489 ms 00:24:08.806 [2024-07-12 09:29:54.977008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:08.806 [2024-07-12 09:29:54.994879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.806 [2024-07-12 09:29:54.994920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:08.806 [2024-07-12 09:29:54.994968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.826 ms 00:24:08.806 [2024-07-12 09:29:54.994980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.806 [2024-07-12 09:29:54.995468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.806 [2024-07-12 09:29:54.995494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:08.806 [2024-07-12 09:29:54.995508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:24:08.806 [2024-07-12 09:29:54.995520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.806 [2024-07-12 09:29:55.035820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.806 [2024-07-12 09:29:55.035885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:08.806 [2024-07-12 09:29:55.035901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.806 [2024-07-12 09:29:55.035929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.806 [2024-07-12 09:29:55.035999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.806 [2024-07-12 09:29:55.036014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:08.806 [2024-07-12 09:29:55.036041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.806 [2024-07-12 09:29:55.036052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.806 [2024-07-12 09:29:55.036157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.806 [2024-07-12 09:29:55.036177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:08.806 [2024-07-12 09:29:55.036190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.806 [2024-07-12 09:29:55.036201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.806 [2024-07-12 09:29:55.036223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.806 [2024-07-12 09:29:55.036290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:08.806 [2024-07-12 09:29:55.036303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.806 [2024-07-12 09:29:55.036314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.806 [2024-07-12 09:29:55.142797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.806 [2024-07-12 09:29:55.142858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:08.806 [2024-07-12 09:29:55.142878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.806 [2024-07-12 09:29:55.142890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.065 [2024-07-12 09:29:55.233162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.065 [2024-07-12 09:29:55.233243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:09.065 [2024-07-12 09:29:55.233264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.065 
[2024-07-12 09:29:55.233275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.065 [2024-07-12 09:29:55.233356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.065 [2024-07-12 09:29:55.233382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:09.065 [2024-07-12 09:29:55.233394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.065 [2024-07-12 09:29:55.233405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.065 [2024-07-12 09:29:55.233450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.065 [2024-07-12 09:29:55.233466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:09.065 [2024-07-12 09:29:55.233478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.065 [2024-07-12 09:29:55.233489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.065 [2024-07-12 09:29:55.233606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.065 [2024-07-12 09:29:55.233625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:09.065 [2024-07-12 09:29:55.233644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.065 [2024-07-12 09:29:55.233655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.065 [2024-07-12 09:29:55.233703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.065 [2024-07-12 09:29:55.233721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:09.065 [2024-07-12 09:29:55.233733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.065 [2024-07-12 09:29:55.233744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.065 [2024-07-12 09:29:55.233787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.065 [2024-07-12 09:29:55.233802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:09.065 [2024-07-12 09:29:55.233820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.065 [2024-07-12 09:29:55.233831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.065 [2024-07-12 09:29:55.233880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.065 [2024-07-12 09:29:55.233896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:09.065 [2024-07-12 09:29:55.233908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.065 [2024-07-12 09:29:55.233919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.065 [2024-07-12 09:29:55.234090] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 464.669 ms, result 0 00:24:10.000 00:24:10.000 00:24:10.258 09:29:56 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:12.789 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:12.789 09:29:58 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:24:12.789 [2024-07-12 09:29:58.818333] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 
initialization... 00:24:12.789 [2024-07-12 09:29:58.818508] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82683 ] 00:24:12.789 [2024-07-12 09:29:58.987495] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.047 [2024-07-12 09:29:59.181151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.305 [2024-07-12 09:29:59.505929] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:13.305 [2024-07-12 09:29:59.506039] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:13.566 [2024-07-12 09:29:59.668926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.566 [2024-07-12 09:29:59.668995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:13.566 [2024-07-12 09:29:59.669017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:13.566 [2024-07-12 09:29:59.669044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.566 [2024-07-12 09:29:59.669146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.566 [2024-07-12 09:29:59.669184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:13.566 [2024-07-12 09:29:59.669197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:24:13.566 [2024-07-12 09:29:59.669212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.566 [2024-07-12 09:29:59.669286] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:13.566 [2024-07-12 09:29:59.670276] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:13.566 [2024-07-12 09:29:59.670319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.566 [2024-07-12 09:29:59.670338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:13.566 [2024-07-12 09:29:59.670351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.082 ms 00:24:13.566 [2024-07-12 09:29:59.670362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.566 [2024-07-12 09:29:59.671602] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:13.566 [2024-07-12 09:29:59.688834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.566 [2024-07-12 09:29:59.688910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:13.566 [2024-07-12 09:29:59.688954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.234 ms 00:24:13.566 [2024-07-12 09:29:59.688965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.566 [2024-07-12 09:29:59.689053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.566 [2024-07-12 09:29:59.689072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:13.566 [2024-07-12 09:29:59.689088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:13.566 [2024-07-12 09:29:59.689114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.566 [2024-07-12 09:29:59.694072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:13.566 [2024-07-12 09:29:59.694117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:13.566 [2024-07-12 09:29:59.694133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.811 ms 00:24:13.566 [2024-07-12 09:29:59.694144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.566 [2024-07-12 09:29:59.694275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.566 [2024-07-12 09:29:59.694300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:13.566 [2024-07-12 09:29:59.694313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:24:13.566 [2024-07-12 09:29:59.694324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.566 [2024-07-12 09:29:59.694389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.566 [2024-07-12 09:29:59.694408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:13.566 [2024-07-12 09:29:59.694420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:13.566 [2024-07-12 09:29:59.694431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.566 [2024-07-12 09:29:59.694465] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:13.566 [2024-07-12 09:29:59.699054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.566 [2024-07-12 09:29:59.699095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:13.566 [2024-07-12 09:29:59.699112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.598 ms 00:24:13.566 [2024-07-12 09:29:59.699123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.566 [2024-07-12 09:29:59.699169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.566 [2024-07-12 09:29:59.699220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:13.567 [2024-07-12 09:29:59.699252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:13.567 [2024-07-12 09:29:59.699263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.567 [2024-07-12 09:29:59.699360] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:13.567 [2024-07-12 09:29:59.699394] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:13.567 [2024-07-12 09:29:59.699481] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:13.567 [2024-07-12 09:29:59.699507] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:13.567 [2024-07-12 09:29:59.699613] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:13.567 [2024-07-12 09:29:59.699628] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:13.567 [2024-07-12 09:29:59.699643] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:13.567 [2024-07-12 09:29:59.699658] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:13.567 [2024-07-12 09:29:59.699671] 
ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:13.567 [2024-07-12 09:29:59.699683] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:13.567 [2024-07-12 09:29:59.699694] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:13.567 [2024-07-12 09:29:59.699704] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:13.567 [2024-07-12 09:29:59.699715] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:13.567 [2024-07-12 09:29:59.699727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.567 [2024-07-12 09:29:59.699742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:13.567 [2024-07-12 09:29:59.699754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:24:13.567 [2024-07-12 09:29:59.699765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.567 [2024-07-12 09:29:59.699873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.567 [2024-07-12 09:29:59.699890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:13.567 [2024-07-12 09:29:59.699901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:24:13.567 [2024-07-12 09:29:59.699912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.567 [2024-07-12 09:29:59.700014] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:13.567 [2024-07-12 09:29:59.700031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:13.567 [2024-07-12 09:29:59.700066] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:13.567 [2024-07-12 09:29:59.700078] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:13.567 [2024-07-12 09:29:59.700089] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:13.567 [2024-07-12 09:29:59.700099] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:13.567 [2024-07-12 09:29:59.700109] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:13.567 [2024-07-12 09:29:59.700120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:13.567 [2024-07-12 09:29:59.700131] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:13.567 [2024-07-12 09:29:59.700142] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:13.567 [2024-07-12 09:29:59.700152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:13.567 [2024-07-12 09:29:59.700162] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:13.567 [2024-07-12 09:29:59.700172] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:13.567 [2024-07-12 09:29:59.700182] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:13.567 [2024-07-12 09:29:59.700192] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:13.567 [2024-07-12 09:29:59.700202] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:13.567 [2024-07-12 09:29:59.700225] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:13.567 [2024-07-12 09:29:59.700239] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:13.567 [2024-07-12 09:29:59.700250] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:13.567 [2024-07-12 09:29:59.700260] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:13.567 [2024-07-12 09:29:59.700283] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:13.567 [2024-07-12 09:29:59.700308] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:13.567 [2024-07-12 09:29:59.700318] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:13.567 [2024-07-12 09:29:59.700328] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:13.567 [2024-07-12 09:29:59.700337] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:13.567 [2024-07-12 09:29:59.700346] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:13.567 [2024-07-12 09:29:59.700356] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:13.567 [2024-07-12 09:29:59.700366] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:13.567 [2024-07-12 09:29:59.700375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:13.567 [2024-07-12 09:29:59.700385] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:13.567 [2024-07-12 09:29:59.700394] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:13.567 [2024-07-12 09:29:59.700404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:13.567 [2024-07-12 09:29:59.700414] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:13.567 [2024-07-12 09:29:59.700423] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:13.567 [2024-07-12 09:29:59.700433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:13.567 [2024-07-12 09:29:59.700443] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:13.567 [2024-07-12 09:29:59.700452] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:13.567 [2024-07-12 09:29:59.700462] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:13.567 [2024-07-12 09:29:59.700471] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:13.567 [2024-07-12 09:29:59.700481] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:13.567 [2024-07-12 09:29:59.700491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:13.567 [2024-07-12 09:29:59.700500] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:13.567 [2024-07-12 09:29:59.700511] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:13.567 [2024-07-12 09:29:59.700520] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:13.567 [2024-07-12 09:29:59.700532] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:13.567 [2024-07-12 09:29:59.700550] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:13.567 [2024-07-12 09:29:59.700568] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:13.567 [2024-07-12 09:29:59.700581] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:13.567 [2024-07-12 09:29:59.700592] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:13.567 [2024-07-12 09:29:59.700602] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:13.567 
[2024-07-12 09:29:59.700612] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:13.567 [2024-07-12 09:29:59.700621] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:13.567 [2024-07-12 09:29:59.700631] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:13.567 [2024-07-12 09:29:59.700642] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:13.567 [2024-07-12 09:29:59.700655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:13.567 [2024-07-12 09:29:59.700667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:13.567 [2024-07-12 09:29:59.700678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:13.567 [2024-07-12 09:29:59.700689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:13.567 [2024-07-12 09:29:59.700699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:13.567 [2024-07-12 09:29:59.700710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:13.567 [2024-07-12 09:29:59.700720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:13.567 [2024-07-12 09:29:59.700731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:13.567 [2024-07-12 09:29:59.700741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:13.567 [2024-07-12 09:29:59.700752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:13.567 [2024-07-12 09:29:59.700762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:13.567 [2024-07-12 09:29:59.700773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:13.567 [2024-07-12 09:29:59.700783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:13.567 [2024-07-12 09:29:59.700809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:13.567 [2024-07-12 09:29:59.700819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:13.567 [2024-07-12 09:29:59.700829] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:13.567 [2024-07-12 09:29:59.700841] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:13.567 [2024-07-12 09:29:59.700853] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:13.568 [2024-07-12 09:29:59.700864] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:13.568 [2024-07-12 09:29:59.700874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:13.568 [2024-07-12 09:29:59.700884] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:13.568 [2024-07-12 09:29:59.700896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.568 [2024-07-12 09:29:59.700913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:13.568 [2024-07-12 09:29:59.700924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.947 ms 00:24:13.568 [2024-07-12 09:29:59.700950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.568 [2024-07-12 09:29:59.746788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.568 [2024-07-12 09:29:59.746854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:13.568 [2024-07-12 09:29:59.746875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.776 ms 00:24:13.568 [2024-07-12 09:29:59.746887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.568 [2024-07-12 09:29:59.747025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.568 [2024-07-12 09:29:59.747043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:13.568 [2024-07-12 09:29:59.747055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:24:13.568 [2024-07-12 09:29:59.747067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.568 [2024-07-12 09:29:59.788526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.568 [2024-07-12 09:29:59.788587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:13.568 [2024-07-12 09:29:59.788607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.353 ms 00:24:13.568 [2024-07-12 09:29:59.788619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.568 [2024-07-12 09:29:59.788686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.568 [2024-07-12 09:29:59.788703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:13.568 [2024-07-12 09:29:59.788715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:13.568 [2024-07-12 09:29:59.788727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.568 [2024-07-12 09:29:59.789164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.568 [2024-07-12 09:29:59.789185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:13.568 [2024-07-12 09:29:59.789213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:24:13.568 [2024-07-12 09:29:59.789224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.568 [2024-07-12 09:29:59.789404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.568 [2024-07-12 09:29:59.789429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:13.568 [2024-07-12 09:29:59.789442] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:24:13.568 [2024-07-12 09:29:59.789452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.568 [2024-07-12 09:29:59.806971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.568 [2024-07-12 09:29:59.807017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:13.568 [2024-07-12 09:29:59.807051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.491 ms 00:24:13.568 [2024-07-12 09:29:59.807077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.568 [2024-07-12 09:29:59.825026] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:13.568 [2024-07-12 09:29:59.825090] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:13.568 [2024-07-12 09:29:59.825125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.568 [2024-07-12 09:29:59.825137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:13.568 [2024-07-12 09:29:59.825180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.854 ms 00:24:13.568 [2024-07-12 09:29:59.825191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.568 [2024-07-12 09:29:59.857475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.568 [2024-07-12 09:29:59.857565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:13.568 [2024-07-12 09:29:59.857613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.189 ms 00:24:13.568 [2024-07-12 09:29:59.857632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.568 [2024-07-12 09:29:59.875211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.568 [2024-07-12 09:29:59.875266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:13.568 [2024-07-12 09:29:59.875284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.517 ms 00:24:13.568 [2024-07-12 09:29:59.875295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.568 [2024-07-12 09:29:59.892197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.568 [2024-07-12 09:29:59.892245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:13.568 [2024-07-12 09:29:59.892278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.854 ms 00:24:13.568 [2024-07-12 09:29:59.892304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.568 [2024-07-12 09:29:59.893188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.568 [2024-07-12 09:29:59.893244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:13.568 [2024-07-12 09:29:59.893263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.754 ms 00:24:13.568 [2024-07-12 09:29:59.893275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.827 [2024-07-12 09:29:59.971931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.827 [2024-07-12 09:29:59.971998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:13.827 [2024-07-12 09:29:59.972019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 78.630 ms 00:24:13.827 [2024-07-12 09:29:59.972046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.827 [2024-07-12 09:29:59.985337] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:13.827 [2024-07-12 09:29:59.988026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.827 [2024-07-12 09:29:59.988063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:13.827 [2024-07-12 09:29:59.988095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.900 ms 00:24:13.827 [2024-07-12 09:29:59.988106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.827 [2024-07-12 09:29:59.988245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.827 [2024-07-12 09:29:59.988268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:13.827 [2024-07-12 09:29:59.988281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:13.827 [2024-07-12 09:29:59.988293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.827 [2024-07-12 09:29:59.988384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.827 [2024-07-12 09:29:59.988409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:13.827 [2024-07-12 09:29:59.988423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:13.827 [2024-07-12 09:29:59.988434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.827 [2024-07-12 09:29:59.988466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.827 [2024-07-12 09:29:59.988511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:13.827 [2024-07-12 09:29:59.988523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:13.827 [2024-07-12 09:29:59.988533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.827 [2024-07-12 09:29:59.988604] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:13.827 [2024-07-12 09:29:59.988622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.827 [2024-07-12 09:29:59.988633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:13.827 [2024-07-12 09:29:59.988649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:24:13.827 [2024-07-12 09:29:59.988659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.827 [2024-07-12 09:30:00.021037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.827 [2024-07-12 09:30:00.021097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:13.827 [2024-07-12 09:30:00.021117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.347 ms 00:24:13.827 [2024-07-12 09:30:00.021129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.827 [2024-07-12 09:30:00.021246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.827 [2024-07-12 09:30:00.021277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:13.827 [2024-07-12 09:30:00.021290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:13.827 [2024-07-12 09:30:00.021300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:13.827 [2024-07-12 09:30:00.022482] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 353.025 ms, result 0 00:24:55.500  Copying: 27/1024 [MB] (27 MBps) Copying: 53/1024 [MB] (26 MBps) Copying: 80/1024 [MB] (27 MBps) Copying: 107/1024 [MB] (26 MBps) Copying: 134/1024 [MB] (27 MBps) Copying: 160/1024 [MB] (26 MBps) Copying: 186/1024 [MB] (25 MBps) Copying: 212/1024 [MB] (25 MBps) Copying: 238/1024 [MB] (25 MBps) Copying: 264/1024 [MB] (26 MBps) Copying: 289/1024 [MB] (25 MBps) Copying: 315/1024 [MB] (25 MBps) Copying: 340/1024 [MB] (25 MBps) Copying: 366/1024 [MB] (25 MBps) Copying: 392/1024 [MB] (26 MBps) Copying: 418/1024 [MB] (26 MBps) Copying: 444/1024 [MB] (25 MBps) Copying: 470/1024 [MB] (25 MBps) Copying: 496/1024 [MB] (25 MBps) Copying: 520/1024 [MB] (24 MBps) Copying: 547/1024 [MB] (26 MBps) Copying: 573/1024 [MB] (26 MBps) Copying: 597/1024 [MB] (23 MBps) Copying: 621/1024 [MB] (24 MBps) Copying: 645/1024 [MB] (24 MBps) Copying: 669/1024 [MB] (23 MBps) Copying: 693/1024 [MB] (24 MBps) Copying: 717/1024 [MB] (23 MBps) Copying: 741/1024 [MB] (23 MBps) Copying: 765/1024 [MB] (23 MBps) Copying: 788/1024 [MB] (23 MBps) Copying: 812/1024 [MB] (23 MBps) Copying: 834/1024 [MB] (22 MBps) Copying: 857/1024 [MB] (22 MBps) Copying: 882/1024 [MB] (24 MBps) Copying: 906/1024 [MB] (24 MBps) Copying: 932/1024 [MB] (26 MBps) Copying: 960/1024 [MB] (27 MBps) Copying: 987/1024 [MB] (26 MBps) Copying: 1013/1024 [MB] (26 MBps) Copying: 1048116/1048576 [kB] (10108 kBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-12 09:30:41.623239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.500 [2024-07-12 09:30:41.623328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:55.500 [2024-07-12 09:30:41.623356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:55.500 [2024-07-12 09:30:41.623370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.500 [2024-07-12 09:30:41.623987] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:55.500 [2024-07-12 09:30:41.627911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.501 [2024-07-12 09:30:41.627955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:55.501 [2024-07-12 09:30:41.627973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.880 ms 00:24:55.501 [2024-07-12 09:30:41.627983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.501 [2024-07-12 09:30:41.642459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.501 [2024-07-12 09:30:41.642547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:55.501 [2024-07-12 09:30:41.642567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.758 ms 00:24:55.501 [2024-07-12 09:30:41.642578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.501 [2024-07-12 09:30:41.662877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.501 [2024-07-12 09:30:41.662942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:55.501 [2024-07-12 09:30:41.662972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.275 ms 00:24:55.501 [2024-07-12 09:30:41.662984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.501 [2024-07-12 
09:30:41.670734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.501 [2024-07-12 09:30:41.670800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:55.501 [2024-07-12 09:30:41.670831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.709 ms 00:24:55.501 [2024-07-12 09:30:41.670842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.501 [2024-07-12 09:30:41.706541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.501 [2024-07-12 09:30:41.706637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:55.501 [2024-07-12 09:30:41.706689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.623 ms 00:24:55.501 [2024-07-12 09:30:41.706714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.501 [2024-07-12 09:30:41.728321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.501 [2024-07-12 09:30:41.728400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:55.501 [2024-07-12 09:30:41.728433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.535 ms 00:24:55.501 [2024-07-12 09:30:41.728449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.501 [2024-07-12 09:30:41.807866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.501 [2024-07-12 09:30:41.808001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:55.501 [2024-07-12 09:30:41.808036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.359 ms 00:24:55.501 [2024-07-12 09:30:41.808060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.501 [2024-07-12 09:30:41.845288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.501 [2024-07-12 09:30:41.845349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:55.501 [2024-07-12 09:30:41.845369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.187 ms 00:24:55.501 [2024-07-12 09:30:41.845380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.760 [2024-07-12 09:30:41.879052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.760 [2024-07-12 09:30:41.879126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:55.760 [2024-07-12 09:30:41.879159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.625 ms 00:24:55.760 [2024-07-12 09:30:41.879170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.760 [2024-07-12 09:30:41.913530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.760 [2024-07-12 09:30:41.913609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:55.760 [2024-07-12 09:30:41.913662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.300 ms 00:24:55.760 [2024-07-12 09:30:41.913673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.760 [2024-07-12 09:30:41.948081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.760 [2024-07-12 09:30:41.948168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:55.760 [2024-07-12 09:30:41.948212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.292 ms 00:24:55.760 [2024-07-12 09:30:41.948242] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.760 [2024-07-12 09:30:41.948287] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:55.760 [2024-07-12 09:30:41.948312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 93952 / 261120 wr_cnt: 1 state: open 00:24:55.760 [2024-07-12 09:30:41.948327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:55.760 [2024-07-12 09:30:41.948339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:55.760 [2024-07-12 09:30:41.948351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:55.760 [2024-07-12 09:30:41.948363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:55.760 [2024-07-12 09:30:41.948375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:55.760 [2024-07-12 09:30:41.948386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:55.760 [2024-07-12 09:30:41.948397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:55.760 [2024-07-12 09:30:41.948409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:55.760 [2024-07-12 09:30:41.948422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:55.760 [2024-07-12 09:30:41.948433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:55.760 [2024-07-12 09:30:41.948445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:55.760 [2024-07-12 09:30:41.948456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:55.760 [2024-07-12 09:30:41.948468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:55.760 [2024-07-12 09:30:41.948479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:55.760 [2024-07-12 09:30:41.948490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:55.760 [2024-07-12 09:30:41.948514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:55.760 [2024-07-12 09:30:41.948525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
24: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948895] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.948999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949183] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 
09:30:41.949494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:55.761 [2024-07-12 09:30:41.949526] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:55.761 [2024-07-12 09:30:41.949538] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fe09ee24-544c-46a4-a924-452dd5e6cb29 00:24:55.761 [2024-07-12 09:30:41.949551] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 93952 00:24:55.761 [2024-07-12 09:30:41.949561] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 94912 00:24:55.761 [2024-07-12 09:30:41.949571] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 93952 00:24:55.761 [2024-07-12 09:30:41.949583] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0102 00:24:55.761 [2024-07-12 09:30:41.949593] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:55.761 [2024-07-12 09:30:41.949609] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:55.761 [2024-07-12 09:30:41.949621] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:55.761 [2024-07-12 09:30:41.949630] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:55.761 [2024-07-12 09:30:41.949640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:55.761 [2024-07-12 09:30:41.949651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.761 [2024-07-12 09:30:41.949666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:55.761 [2024-07-12 09:30:41.949678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.366 ms 00:24:55.761 [2024-07-12 09:30:41.949688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.761 [2024-07-12 09:30:41.968444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.762 [2024-07-12 09:30:41.968483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:55.762 [2024-07-12 09:30:41.968530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.711 ms 00:24:55.762 [2024-07-12 09:30:41.968541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.762 [2024-07-12 09:30:41.969080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.762 [2024-07-12 09:30:41.969117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:55.762 [2024-07-12 09:30:41.969130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:24:55.762 [2024-07-12 09:30:41.969139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.762 [2024-07-12 09:30:42.009451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.762 [2024-07-12 09:30:42.009525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:55.762 [2024-07-12 09:30:42.009558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.762 [2024-07-12 09:30:42.009570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.762 [2024-07-12 09:30:42.009665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.762 [2024-07-12 09:30:42.009681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize bands metadata 00:24:55.762 [2024-07-12 09:30:42.009708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.762 [2024-07-12 09:30:42.009719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.762 [2024-07-12 09:30:42.009865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.762 [2024-07-12 09:30:42.009885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:55.762 [2024-07-12 09:30:42.009896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.762 [2024-07-12 09:30:42.009906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.762 [2024-07-12 09:30:42.009933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.762 [2024-07-12 09:30:42.009948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:55.762 [2024-07-12 09:30:42.009959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.762 [2024-07-12 09:30:42.009968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.020 [2024-07-12 09:30:42.120045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.020 [2024-07-12 09:30:42.120110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:56.020 [2024-07-12 09:30:42.120127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.020 [2024-07-12 09:30:42.120138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.020 [2024-07-12 09:30:42.214170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.020 [2024-07-12 09:30:42.214251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:56.020 [2024-07-12 09:30:42.214286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.020 [2024-07-12 09:30:42.214312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.020 [2024-07-12 09:30:42.214443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.020 [2024-07-12 09:30:42.214461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:56.020 [2024-07-12 09:30:42.214473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.020 [2024-07-12 09:30:42.214484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.020 [2024-07-12 09:30:42.214527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.020 [2024-07-12 09:30:42.214552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:56.020 [2024-07-12 09:30:42.214565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.020 [2024-07-12 09:30:42.214575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.020 [2024-07-12 09:30:42.214694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.020 [2024-07-12 09:30:42.214713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:56.020 [2024-07-12 09:30:42.214725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.020 [2024-07-12 09:30:42.214736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.020 [2024-07-12 09:30:42.214783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.020 [2024-07-12 
09:30:42.214801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:56.020 [2024-07-12 09:30:42.214819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.020 [2024-07-12 09:30:42.214830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.020 [2024-07-12 09:30:42.214877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.020 [2024-07-12 09:30:42.214894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:56.020 [2024-07-12 09:30:42.214906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.020 [2024-07-12 09:30:42.214916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.020 [2024-07-12 09:30:42.214973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.020 [2024-07-12 09:30:42.214996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:56.020 [2024-07-12 09:30:42.215008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.020 [2024-07-12 09:30:42.215018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.020 [2024-07-12 09:30:42.215154] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 597.964 ms, result 0 00:24:57.920 00:24:57.920 00:24:57.920 09:30:43 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:57.920 [2024-07-12 09:30:44.004811] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:57.920 [2024-07-12 09:30:44.005522] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83125 ] 00:24:57.920 [2024-07-12 09:30:44.181895] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.178 [2024-07-12 09:30:44.404413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.436 [2024-07-12 09:30:44.731445] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:58.436 [2024-07-12 09:30:44.731556] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:58.703 [2024-07-12 09:30:44.894240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.703 [2024-07-12 09:30:44.894324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:58.703 [2024-07-12 09:30:44.894345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:58.703 [2024-07-12 09:30:44.894357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.703 [2024-07-12 09:30:44.894429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.703 [2024-07-12 09:30:44.894449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:58.703 [2024-07-12 09:30:44.894462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:24:58.703 [2024-07-12 09:30:44.894491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.703 [2024-07-12 09:30:44.894524] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:58.703 [2024-07-12 09:30:44.895460] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:58.703 [2024-07-12 09:30:44.895507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.703 [2024-07-12 09:30:44.895526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:58.703 [2024-07-12 09:30:44.895538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.990 ms 00:24:58.703 [2024-07-12 09:30:44.895549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.703 [2024-07-12 09:30:44.896756] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:58.703 [2024-07-12 09:30:44.914612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.703 [2024-07-12 09:30:44.914658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:58.703 [2024-07-12 09:30:44.914676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.858 ms 00:24:58.703 [2024-07-12 09:30:44.914687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.703 [2024-07-12 09:30:44.914761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.703 [2024-07-12 09:30:44.914795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:58.703 [2024-07-12 09:30:44.914811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:24:58.703 [2024-07-12 09:30:44.914821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.703 [2024-07-12 09:30:44.919625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:58.703 [2024-07-12 09:30:44.919814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:58.703 [2024-07-12 09:30:44.919940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.714 ms 00:24:58.703 [2024-07-12 09:30:44.919991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.703 [2024-07-12 09:30:44.920122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.703 [2024-07-12 09:30:44.920210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:58.703 [2024-07-12 09:30:44.920262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:58.703 [2024-07-12 09:30:44.920301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.703 [2024-07-12 09:30:44.920395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.703 [2024-07-12 09:30:44.920510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:58.703 [2024-07-12 09:30:44.920567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:58.703 [2024-07-12 09:30:44.920583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.703 [2024-07-12 09:30:44.920623] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:58.703 [2024-07-12 09:30:44.925327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.703 [2024-07-12 09:30:44.925385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:58.703 [2024-07-12 09:30:44.925402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.713 ms 00:24:58.703 [2024-07-12 09:30:44.925414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.703 [2024-07-12 09:30:44.925468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.703 [2024-07-12 09:30:44.925485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:58.703 [2024-07-12 09:30:44.925498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:58.703 [2024-07-12 09:30:44.925509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.703 [2024-07-12 09:30:44.925574] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:58.703 [2024-07-12 09:30:44.925606] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:58.703 [2024-07-12 09:30:44.925651] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:58.703 [2024-07-12 09:30:44.925673] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:58.703 [2024-07-12 09:30:44.925778] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:58.703 [2024-07-12 09:30:44.925793] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:58.703 [2024-07-12 09:30:44.925807] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:58.703 [2024-07-12 09:30:44.925821] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:58.703 [2024-07-12 09:30:44.925834] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:58.703 [2024-07-12 09:30:44.925846] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:58.703 [2024-07-12 09:30:44.925857] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:58.703 [2024-07-12 09:30:44.925867] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:58.703 [2024-07-12 09:30:44.925877] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:58.703 [2024-07-12 09:30:44.925888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.703 [2024-07-12 09:30:44.925903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:58.703 [2024-07-12 09:30:44.925914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:24:58.703 [2024-07-12 09:30:44.925925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.703 [2024-07-12 09:30:44.926020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.703 [2024-07-12 09:30:44.926035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:58.703 [2024-07-12 09:30:44.926046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:58.703 [2024-07-12 09:30:44.926056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.703 [2024-07-12 09:30:44.926166] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:58.703 [2024-07-12 09:30:44.926196] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:58.703 [2024-07-12 09:30:44.926217] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:58.703 [2024-07-12 09:30:44.926228] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.703 [2024-07-12 09:30:44.926240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:58.703 [2024-07-12 09:30:44.926250] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:58.703 [2024-07-12 09:30:44.926260] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:58.703 [2024-07-12 09:30:44.926272] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:58.703 [2024-07-12 09:30:44.926283] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:58.703 [2024-07-12 09:30:44.926293] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:58.703 [2024-07-12 09:30:44.926303] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:58.703 [2024-07-12 09:30:44.926314] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:58.703 [2024-07-12 09:30:44.926323] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:58.703 [2024-07-12 09:30:44.926334] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:58.703 [2024-07-12 09:30:44.926344] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:58.703 [2024-07-12 09:30:44.926353] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.703 [2024-07-12 09:30:44.926363] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:58.703 [2024-07-12 09:30:44.926373] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:58.703 [2024-07-12 09:30:44.926383] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.703 [2024-07-12 09:30:44.926394] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:58.703 [2024-07-12 09:30:44.926416] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:58.703 [2024-07-12 09:30:44.926426] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:58.703 [2024-07-12 09:30:44.926436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:58.703 [2024-07-12 09:30:44.926446] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:58.703 [2024-07-12 09:30:44.926456] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:58.703 [2024-07-12 09:30:44.926467] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:58.703 [2024-07-12 09:30:44.926477] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:58.703 [2024-07-12 09:30:44.926486] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:58.703 [2024-07-12 09:30:44.926496] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:58.703 [2024-07-12 09:30:44.926506] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:58.703 [2024-07-12 09:30:44.926516] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:58.703 [2024-07-12 09:30:44.926526] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:58.703 [2024-07-12 09:30:44.926536] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:58.703 [2024-07-12 09:30:44.926545] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:58.703 [2024-07-12 09:30:44.926555] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:58.703 [2024-07-12 09:30:44.926565] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:58.703 [2024-07-12 09:30:44.926575] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:58.703 [2024-07-12 09:30:44.926584] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:58.703 [2024-07-12 09:30:44.926601] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:58.703 [2024-07-12 09:30:44.926612] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.703 [2024-07-12 09:30:44.926623] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:58.703 [2024-07-12 09:30:44.926633] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:58.703 [2024-07-12 09:30:44.926642] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.703 [2024-07-12 09:30:44.926652] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:58.703 [2024-07-12 09:30:44.926663] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:58.703 [2024-07-12 09:30:44.926673] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:58.703 [2024-07-12 09:30:44.926683] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.703 [2024-07-12 09:30:44.926694] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:58.703 [2024-07-12 09:30:44.926704] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:58.703 [2024-07-12 09:30:44.926714] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:58.703 
[2024-07-12 09:30:44.926725] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:58.703 [2024-07-12 09:30:44.926735] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:58.703 [2024-07-12 09:30:44.926745] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:58.703 [2024-07-12 09:30:44.926757] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:58.703 [2024-07-12 09:30:44.926769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:58.703 [2024-07-12 09:30:44.926782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:58.703 [2024-07-12 09:30:44.926793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:58.703 [2024-07-12 09:30:44.926804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:58.703 [2024-07-12 09:30:44.926815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:58.703 [2024-07-12 09:30:44.926825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:58.703 [2024-07-12 09:30:44.926837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:58.703 [2024-07-12 09:30:44.926847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:58.703 [2024-07-12 09:30:44.926858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:58.703 [2024-07-12 09:30:44.926869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:58.703 [2024-07-12 09:30:44.926880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:58.703 [2024-07-12 09:30:44.926891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:58.703 [2024-07-12 09:30:44.926902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:58.703 [2024-07-12 09:30:44.926913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:58.704 [2024-07-12 09:30:44.926924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:58.704 [2024-07-12 09:30:44.926934] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:58.704 [2024-07-12 09:30:44.926947] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:58.704 [2024-07-12 09:30:44.926959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:58.704 [2024-07-12 09:30:44.926970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:58.704 [2024-07-12 09:30:44.926982] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:58.704 [2024-07-12 09:30:44.926993] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:58.704 [2024-07-12 09:30:44.927005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.704 [2024-07-12 09:30:44.927021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:58.704 [2024-07-12 09:30:44.927032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.905 ms 00:24:58.704 [2024-07-12 09:30:44.927042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.704 [2024-07-12 09:30:44.971512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.704 [2024-07-12 09:30:44.971582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:58.704 [2024-07-12 09:30:44.971603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.409 ms 00:24:58.704 [2024-07-12 09:30:44.971615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.704 [2024-07-12 09:30:44.971739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.704 [2024-07-12 09:30:44.971756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:58.704 [2024-07-12 09:30:44.971768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:58.704 [2024-07-12 09:30:44.971779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.704 [2024-07-12 09:30:45.013716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.704 [2024-07-12 09:30:45.013775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:58.704 [2024-07-12 09:30:45.013796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.843 ms 00:24:58.704 [2024-07-12 09:30:45.013808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.704 [2024-07-12 09:30:45.013876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.704 [2024-07-12 09:30:45.013894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:58.704 [2024-07-12 09:30:45.013906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:58.704 [2024-07-12 09:30:45.013917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.704 [2024-07-12 09:30:45.014324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.704 [2024-07-12 09:30:45.014345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:58.704 [2024-07-12 09:30:45.014358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:24:58.704 [2024-07-12 09:30:45.014368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.704 [2024-07-12 09:30:45.014525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.704 [2024-07-12 09:30:45.014545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:58.704 [2024-07-12 09:30:45.014557] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:24:58.704 [2024-07-12 09:30:45.014567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.704 [2024-07-12 09:30:45.032141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.704 [2024-07-12 09:30:45.032206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:58.704 [2024-07-12 09:30:45.032242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.547 ms 00:24:58.704 [2024-07-12 09:30:45.032253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.704 [2024-07-12 09:30:45.049660] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:58.704 [2024-07-12 09:30:45.049722] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:58.704 [2024-07-12 09:30:45.049760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.704 [2024-07-12 09:30:45.049772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:58.704 [2024-07-12 09:30:45.049785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.340 ms 00:24:58.704 [2024-07-12 09:30:45.049797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.963 [2024-07-12 09:30:45.082007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.963 [2024-07-12 09:30:45.082055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:58.963 [2024-07-12 09:30:45.082073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.162 ms 00:24:58.963 [2024-07-12 09:30:45.082092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.963 [2024-07-12 09:30:45.099925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.963 [2024-07-12 09:30:45.099985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:58.963 [2024-07-12 09:30:45.100003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.775 ms 00:24:58.963 [2024-07-12 09:30:45.100014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.963 [2024-07-12 09:30:45.117986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.963 [2024-07-12 09:30:45.118035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:58.963 [2024-07-12 09:30:45.118069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.928 ms 00:24:58.963 [2024-07-12 09:30:45.118079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.963 [2024-07-12 09:30:45.118910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.963 [2024-07-12 09:30:45.118946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:58.963 [2024-07-12 09:30:45.118961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:24:58.963 [2024-07-12 09:30:45.118972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.963 [2024-07-12 09:30:45.202760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.963 [2024-07-12 09:30:45.202851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:58.963 [2024-07-12 09:30:45.202873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 83.757 ms 00:24:58.963 [2024-07-12 09:30:45.202885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.963 [2024-07-12 09:30:45.216356] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:58.963 [2024-07-12 09:30:45.219136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.963 [2024-07-12 09:30:45.219171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:58.963 [2024-07-12 09:30:45.219249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.127 ms 00:24:58.963 [2024-07-12 09:30:45.219263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.963 [2024-07-12 09:30:45.219354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.963 [2024-07-12 09:30:45.219374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:58.963 [2024-07-12 09:30:45.219388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:58.963 [2024-07-12 09:30:45.219398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.963 [2024-07-12 09:30:45.220801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.963 [2024-07-12 09:30:45.220841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:58.963 [2024-07-12 09:30:45.220857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.350 ms 00:24:58.963 [2024-07-12 09:30:45.220883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.963 [2024-07-12 09:30:45.220935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.963 [2024-07-12 09:30:45.220951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:58.963 [2024-07-12 09:30:45.220963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:58.963 [2024-07-12 09:30:45.220974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.963 [2024-07-12 09:30:45.221044] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:58.963 [2024-07-12 09:30:45.221062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.963 [2024-07-12 09:30:45.221077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:58.963 [2024-07-12 09:30:45.221088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:24:58.963 [2024-07-12 09:30:45.221099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.963 [2024-07-12 09:30:45.254777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.963 [2024-07-12 09:30:45.254832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:58.963 [2024-07-12 09:30:45.254851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.648 ms 00:24:58.963 [2024-07-12 09:30:45.254862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.963 [2024-07-12 09:30:45.254964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.963 [2024-07-12 09:30:45.254984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:58.963 [2024-07-12 09:30:45.254997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:24:58.963 [2024-07-12 09:30:45.255009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:58.963 [2024-07-12 09:30:45.258518] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 363.345 ms, result 0 00:25:38.251  Copying: 16/1024 [MB] (16 MBps) Copying: 43/1024 [MB] (27 MBps) Copying: 70/1024 [MB] (27 MBps) Copying: 97/1024 [MB] (26 MBps) Copying: 124/1024 [MB] (27 MBps) Copying: 152/1024 [MB] (27 MBps) Copying: 180/1024 [MB] (27 MBps) Copying: 207/1024 [MB] (27 MBps) Copying: 233/1024 [MB] (26 MBps) Copying: 259/1024 [MB] (25 MBps) Copying: 287/1024 [MB] (28 MBps) Copying: 314/1024 [MB] (27 MBps) Copying: 341/1024 [MB] (26 MBps) Copying: 367/1024 [MB] (26 MBps) Copying: 393/1024 [MB] (26 MBps) Copying: 421/1024 [MB] (27 MBps) Copying: 448/1024 [MB] (27 MBps) Copying: 475/1024 [MB] (26 MBps) Copying: 503/1024 [MB] (27 MBps) Copying: 529/1024 [MB] (26 MBps) Copying: 556/1024 [MB] (26 MBps) Copying: 580/1024 [MB] (24 MBps) Copying: 607/1024 [MB] (26 MBps) Copying: 635/1024 [MB] (27 MBps) Copying: 662/1024 [MB] (27 MBps) Copying: 690/1024 [MB] (27 MBps) Copying: 717/1024 [MB] (27 MBps) Copying: 745/1024 [MB] (27 MBps) Copying: 773/1024 [MB] (27 MBps) Copying: 800/1024 [MB] (26 MBps) Copying: 824/1024 [MB] (24 MBps) Copying: 850/1024 [MB] (25 MBps) Copying: 873/1024 [MB] (23 MBps) Copying: 899/1024 [MB] (26 MBps) Copying: 925/1024 [MB] (26 MBps) Copying: 950/1024 [MB] (25 MBps) Copying: 975/1024 [MB] (25 MBps) Copying: 1001/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-12 09:31:24.504956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.251 [2024-07-12 09:31:24.505277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:38.251 [2024-07-12 09:31:24.505450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:38.251 [2024-07-12 09:31:24.505520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.251 [2024-07-12 09:31:24.505707] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:38.251 [2024-07-12 09:31:24.510138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.251 [2024-07-12 09:31:24.510198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:38.251 [2024-07-12 09:31:24.510215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.373 ms 00:25:38.251 [2024-07-12 09:31:24.510226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.251 [2024-07-12 09:31:24.510472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.251 [2024-07-12 09:31:24.510490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:38.251 [2024-07-12 09:31:24.510503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:25:38.251 [2024-07-12 09:31:24.510514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.251 [2024-07-12 09:31:24.516528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.251 [2024-07-12 09:31:24.516593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:38.251 [2024-07-12 09:31:24.516609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.994 ms 00:25:38.251 [2024-07-12 09:31:24.516619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.251 [2024-07-12 09:31:24.523655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.251 [2024-07-12 
09:31:24.523690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:38.251 [2024-07-12 09:31:24.523705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.996 ms 00:25:38.251 [2024-07-12 09:31:24.523724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.251 [2024-07-12 09:31:24.554892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.251 [2024-07-12 09:31:24.554935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:38.251 [2024-07-12 09:31:24.554969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.098 ms 00:25:38.251 [2024-07-12 09:31:24.554980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.251 [2024-07-12 09:31:24.573148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.251 [2024-07-12 09:31:24.573217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:38.251 [2024-07-12 09:31:24.573244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.124 ms 00:25:38.251 [2024-07-12 09:31:24.573267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.513 [2024-07-12 09:31:24.678366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.513 [2024-07-12 09:31:24.678492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:38.513 [2024-07-12 09:31:24.678546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.062 ms 00:25:38.513 [2024-07-12 09:31:24.678558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.513 [2024-07-12 09:31:24.711035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.513 [2024-07-12 09:31:24.711076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:38.513 [2024-07-12 09:31:24.711108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.454 ms 00:25:38.513 [2024-07-12 09:31:24.711118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.513 [2024-07-12 09:31:24.742134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.513 [2024-07-12 09:31:24.742174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:38.513 [2024-07-12 09:31:24.742233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.976 ms 00:25:38.513 [2024-07-12 09:31:24.742245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.513 [2024-07-12 09:31:24.774096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.513 [2024-07-12 09:31:24.774162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:38.513 [2024-07-12 09:31:24.774196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.809 ms 00:25:38.513 [2024-07-12 09:31:24.774253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.513 [2024-07-12 09:31:24.806359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.513 [2024-07-12 09:31:24.806405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:38.513 [2024-07-12 09:31:24.806423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.015 ms 00:25:38.513 [2024-07-12 09:31:24.806434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.513 [2024-07-12 09:31:24.806478] ftl_debug.c: 
165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:38.513 [2024-07-12 09:31:24.806520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:25:38.513 [2024-07-12 09:31:24.806542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806816] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.806999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.807011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.807023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.807034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.807046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.807058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.807069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.807081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.807093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.807104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 
09:31:24.807116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.807128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.807140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.807151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.807163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.807175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.807207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.807223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.807234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.807246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.807258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.807270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.807282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.807298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:38.513 [2024-07-12 09:31:24.807310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:25:38.514 [2024-07-12 09:31:24.807443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:38.514 [2024-07-12 09:31:24.807769] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:38.514 [2024-07-12 09:31:24.807780] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fe09ee24-544c-46a4-a924-452dd5e6cb29 00:25:38.514 [2024-07-12 09:31:24.807793] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:25:38.514 [2024-07-12 09:31:24.807811] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 40896 00:25:38.514 [2024-07-12 09:31:24.807827] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 39936 00:25:38.514 [2024-07-12 09:31:24.807839] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0240 00:25:38.514 [2024-07-12 09:31:24.807857] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:38.514 [2024-07-12 09:31:24.807868] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:38.514 [2024-07-12 09:31:24.807878] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:38.514 [2024-07-12 09:31:24.807888] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:38.514 [2024-07-12 09:31:24.807898] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:38.514 [2024-07-12 09:31:24.807910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.514 [2024-07-12 09:31:24.807925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:38.514 [2024-07-12 09:31:24.807936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.433 ms 00:25:38.514 [2024-07-12 09:31:24.807947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.514 [2024-07-12 09:31:24.824785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.514 [2024-07-12 09:31:24.824829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:38.514 [2024-07-12 09:31:24.824847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.787 ms 00:25:38.514 [2024-07-12 09:31:24.824873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.514 [2024-07-12 09:31:24.825325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.514 [2024-07-12 09:31:24.825346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:38.514 [2024-07-12 09:31:24.825359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:25:38.514 [2024-07-12 09:31:24.825370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.773 [2024-07-12 09:31:24.863073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.773 [2024-07-12 09:31:24.863133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:38.773 [2024-07-12 09:31:24.863150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.773 [2024-07-12 09:31:24.863167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.773 [2024-07-12 09:31:24.863259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.773 [2024-07-12 09:31:24.863278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:38.774 [2024-07-12 09:31:24.863289] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.774 [2024-07-12 09:31:24.863300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.774 [2024-07-12 09:31:24.863390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.774 [2024-07-12 09:31:24.863410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:38.774 [2024-07-12 09:31:24.863422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.774 [2024-07-12 09:31:24.863432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.774 [2024-07-12 09:31:24.863460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.774 [2024-07-12 09:31:24.863472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:38.774 [2024-07-12 09:31:24.863494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.774 [2024-07-12 09:31:24.863505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.774 [2024-07-12 09:31:24.964026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.774 [2024-07-12 09:31:24.964091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:38.774 [2024-07-12 09:31:24.964109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.774 [2024-07-12 09:31:24.964127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.774 [2024-07-12 09:31:25.051202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.774 [2024-07-12 09:31:25.051297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:38.774 [2024-07-12 09:31:25.051317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.774 [2024-07-12 09:31:25.051329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.774 [2024-07-12 09:31:25.051403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.774 [2024-07-12 09:31:25.051418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:38.774 [2024-07-12 09:31:25.051429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.774 [2024-07-12 09:31:25.051440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.774 [2024-07-12 09:31:25.051490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.774 [2024-07-12 09:31:25.051512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:38.774 [2024-07-12 09:31:25.051524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.774 [2024-07-12 09:31:25.051535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.774 [2024-07-12 09:31:25.051657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.774 [2024-07-12 09:31:25.051676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:38.774 [2024-07-12 09:31:25.051689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.774 [2024-07-12 09:31:25.051699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.774 [2024-07-12 09:31:25.051749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.774 [2024-07-12 09:31:25.051772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:25:38.774 [2024-07-12 09:31:25.051784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.774 [2024-07-12 09:31:25.051795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.774 [2024-07-12 09:31:25.051840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.774 [2024-07-12 09:31:25.051868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:38.774 [2024-07-12 09:31:25.051880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.774 [2024-07-12 09:31:25.051891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.774 [2024-07-12 09:31:25.051941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.774 [2024-07-12 09:31:25.051962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:38.774 [2024-07-12 09:31:25.051974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.774 [2024-07-12 09:31:25.051984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.774 [2024-07-12 09:31:25.052154] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 547.141 ms, result 0 00:25:40.149 00:25:40.149 00:25:40.149 09:31:26 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:42.682 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:42.682 09:31:28 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:42.682 09:31:28 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:25:42.682 09:31:28 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:42.682 09:31:28 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:42.682 09:31:28 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:42.682 Process with pid 81605 is not found 00:25:42.682 Remove shared memory files 00:25:42.682 09:31:28 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 81605 00:25:42.682 09:31:28 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 81605 ']' 00:25:42.682 09:31:28 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 81605 00:25:42.682 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (81605) - No such process 00:25:42.682 09:31:28 ftl.ftl_restore -- common/autotest_common.sh@975 -- # echo 'Process with pid 81605 is not found' 00:25:42.682 09:31:28 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:25:42.682 09:31:28 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:42.682 09:31:28 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:25:42.682 09:31:28 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:25:42.682 09:31:28 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:25:42.682 09:31:28 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:42.682 09:31:28 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:25:42.682 ************************************ 00:25:42.682 END TEST ftl_restore 00:25:42.682 ************************************ 00:25:42.682 00:25:42.682 real 3m15.289s 00:25:42.682 user 3m1.878s 00:25:42.682 sys 0m15.645s 00:25:42.682 09:31:28 ftl.ftl_restore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:42.682 
09:31:28 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:42.682 09:31:28 ftl -- common/autotest_common.sh@1142 -- # return 0 00:25:42.682 09:31:28 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:42.682 09:31:28 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:42.682 09:31:28 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:42.682 09:31:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:42.682 ************************************ 00:25:42.682 START TEST ftl_dirty_shutdown 00:25:42.682 ************************************ 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:42.682 * Looking for test storage... 00:25:42.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=83624 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 83624 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@829 -- # '[' -z 83624 ']' 00:25:42.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:42.682 09:31:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:42.682 [2024-07-12 09:31:28.909857] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:25:42.683 [2024-07-12 09:31:28.910007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83624 ] 00:25:42.941 [2024-07-12 09:31:29.072352] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.941 [2024-07-12 09:31:29.262896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.877 09:31:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:43.877 09:31:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # return 0 00:25:43.877 09:31:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:43.877 09:31:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:43.877 09:31:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:43.877 09:31:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:43.877 09:31:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:43.877 09:31:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:44.135 09:31:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:44.135 09:31:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:44.135 09:31:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:44.135 09:31:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:25:44.135 09:31:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:44.135 09:31:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:44.135 09:31:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:44.136 09:31:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:44.394 09:31:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:44.394 { 00:25:44.394 "name": "nvme0n1", 00:25:44.394 "aliases": [ 00:25:44.394 "79e27544-7d9b-47c3-9916-35a00141c6f9" 00:25:44.394 ], 00:25:44.394 "product_name": "NVMe disk", 00:25:44.394 "block_size": 4096, 00:25:44.394 "num_blocks": 1310720, 00:25:44.394 "uuid": "79e27544-7d9b-47c3-9916-35a00141c6f9", 00:25:44.394 "assigned_rate_limits": { 00:25:44.394 "rw_ios_per_sec": 0, 00:25:44.394 "rw_mbytes_per_sec": 0, 00:25:44.394 "r_mbytes_per_sec": 0, 00:25:44.394 "w_mbytes_per_sec": 0 00:25:44.394 }, 00:25:44.394 "claimed": true, 00:25:44.394 "claim_type": "read_many_write_one", 00:25:44.394 "zoned": false, 00:25:44.394 "supported_io_types": { 00:25:44.394 "read": true, 00:25:44.394 "write": true, 00:25:44.394 "unmap": true, 00:25:44.394 "flush": true, 00:25:44.394 "reset": true, 00:25:44.394 "nvme_admin": true, 00:25:44.394 "nvme_io": true, 00:25:44.394 "nvme_io_md": false, 00:25:44.394 "write_zeroes": true, 00:25:44.394 "zcopy": false, 00:25:44.394 "get_zone_info": false, 00:25:44.394 "zone_management": false, 00:25:44.394 "zone_append": false, 00:25:44.394 "compare": true, 00:25:44.394 "compare_and_write": false, 00:25:44.394 "abort": true, 00:25:44.394 "seek_hole": false, 00:25:44.394 "seek_data": false, 00:25:44.394 "copy": true, 00:25:44.394 
"nvme_iov_md": false 00:25:44.394 }, 00:25:44.394 "driver_specific": { 00:25:44.394 "nvme": [ 00:25:44.394 { 00:25:44.394 "pci_address": "0000:00:11.0", 00:25:44.394 "trid": { 00:25:44.394 "trtype": "PCIe", 00:25:44.394 "traddr": "0000:00:11.0" 00:25:44.394 }, 00:25:44.394 "ctrlr_data": { 00:25:44.394 "cntlid": 0, 00:25:44.394 "vendor_id": "0x1b36", 00:25:44.394 "model_number": "QEMU NVMe Ctrl", 00:25:44.394 "serial_number": "12341", 00:25:44.394 "firmware_revision": "8.0.0", 00:25:44.394 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:44.394 "oacs": { 00:25:44.394 "security": 0, 00:25:44.394 "format": 1, 00:25:44.394 "firmware": 0, 00:25:44.394 "ns_manage": 1 00:25:44.394 }, 00:25:44.394 "multi_ctrlr": false, 00:25:44.394 "ana_reporting": false 00:25:44.394 }, 00:25:44.394 "vs": { 00:25:44.394 "nvme_version": "1.4" 00:25:44.394 }, 00:25:44.394 "ns_data": { 00:25:44.394 "id": 1, 00:25:44.394 "can_share": false 00:25:44.394 } 00:25:44.394 } 00:25:44.394 ], 00:25:44.395 "mp_policy": "active_passive" 00:25:44.395 } 00:25:44.395 } 00:25:44.395 ]' 00:25:44.395 09:31:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:44.395 09:31:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:44.395 09:31:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:44.395 09:31:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:25:44.395 09:31:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:25:44.395 09:31:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:25:44.395 09:31:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:44.395 09:31:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:44.395 09:31:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:44.395 09:31:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:44.395 09:31:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:44.961 09:31:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=ab26e56a-338d-4e2c-8a66-7c960f22dbf6 00:25:44.961 09:31:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:44.961 09:31:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ab26e56a-338d-4e2c-8a66-7c960f22dbf6 00:25:44.961 09:31:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:45.527 09:31:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=f31e56df-09a0-4b5e-ace8-f0531d46a6c2 00:25:45.528 09:31:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f31e56df-09a0-4b5e-ace8-f0531d46a6c2 00:25:45.528 09:31:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=85bd60f8-e028-4274-b98d-83eca64d69bd 00:25:45.528 09:31:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:45.528 09:31:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 85bd60f8-e028-4274-b98d-83eca64d69bd 00:25:45.528 09:31:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:45.528 09:31:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:45.528 
09:31:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=85bd60f8-e028-4274-b98d-83eca64d69bd 00:25:45.528 09:31:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:45.786 09:31:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 85bd60f8-e028-4274-b98d-83eca64d69bd 00:25:45.786 09:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=85bd60f8-e028-4274-b98d-83eca64d69bd 00:25:45.786 09:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:45.786 09:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:45.786 09:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:45.786 09:31:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 85bd60f8-e028-4274-b98d-83eca64d69bd 00:25:45.786 09:31:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:45.786 { 00:25:45.786 "name": "85bd60f8-e028-4274-b98d-83eca64d69bd", 00:25:45.786 "aliases": [ 00:25:45.786 "lvs/nvme0n1p0" 00:25:45.786 ], 00:25:45.786 "product_name": "Logical Volume", 00:25:45.786 "block_size": 4096, 00:25:45.786 "num_blocks": 26476544, 00:25:45.786 "uuid": "85bd60f8-e028-4274-b98d-83eca64d69bd", 00:25:45.786 "assigned_rate_limits": { 00:25:45.786 "rw_ios_per_sec": 0, 00:25:45.786 "rw_mbytes_per_sec": 0, 00:25:45.786 "r_mbytes_per_sec": 0, 00:25:45.786 "w_mbytes_per_sec": 0 00:25:45.786 }, 00:25:45.786 "claimed": false, 00:25:45.786 "zoned": false, 00:25:45.786 "supported_io_types": { 00:25:45.786 "read": true, 00:25:45.786 "write": true, 00:25:45.786 "unmap": true, 00:25:45.786 "flush": false, 00:25:45.786 "reset": true, 00:25:45.786 "nvme_admin": false, 00:25:45.786 "nvme_io": false, 00:25:45.786 "nvme_io_md": false, 00:25:45.786 "write_zeroes": true, 00:25:45.786 "zcopy": false, 00:25:45.786 "get_zone_info": false, 00:25:45.786 "zone_management": false, 00:25:45.786 "zone_append": false, 00:25:45.786 "compare": false, 00:25:45.786 "compare_and_write": false, 00:25:45.786 "abort": false, 00:25:45.786 "seek_hole": true, 00:25:45.786 "seek_data": true, 00:25:45.786 "copy": false, 00:25:45.786 "nvme_iov_md": false 00:25:45.787 }, 00:25:45.787 "driver_specific": { 00:25:45.787 "lvol": { 00:25:45.787 "lvol_store_uuid": "f31e56df-09a0-4b5e-ace8-f0531d46a6c2", 00:25:45.787 "base_bdev": "nvme0n1", 00:25:45.787 "thin_provision": true, 00:25:45.787 "num_allocated_clusters": 0, 00:25:45.787 "snapshot": false, 00:25:45.787 "clone": false, 00:25:45.787 "esnap_clone": false 00:25:45.787 } 00:25:45.787 } 00:25:45.787 } 00:25:45.787 ]' 00:25:46.045 09:31:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:46.045 09:31:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:46.045 09:31:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:46.045 09:31:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:25:46.045 09:31:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:25:46.045 09:31:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:25:46.045 09:31:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:46.045 09:31:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:46.045 09:31:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:46.304 09:31:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:46.304 09:31:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:46.304 09:31:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 85bd60f8-e028-4274-b98d-83eca64d69bd 00:25:46.304 09:31:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=85bd60f8-e028-4274-b98d-83eca64d69bd 00:25:46.304 09:31:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:46.304 09:31:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:46.304 09:31:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:46.304 09:31:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 85bd60f8-e028-4274-b98d-83eca64d69bd 00:25:46.562 09:31:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:46.562 { 00:25:46.562 "name": "85bd60f8-e028-4274-b98d-83eca64d69bd", 00:25:46.562 "aliases": [ 00:25:46.562 "lvs/nvme0n1p0" 00:25:46.562 ], 00:25:46.562 "product_name": "Logical Volume", 00:25:46.562 "block_size": 4096, 00:25:46.562 "num_blocks": 26476544, 00:25:46.562 "uuid": "85bd60f8-e028-4274-b98d-83eca64d69bd", 00:25:46.562 "assigned_rate_limits": { 00:25:46.562 "rw_ios_per_sec": 0, 00:25:46.562 "rw_mbytes_per_sec": 0, 00:25:46.562 "r_mbytes_per_sec": 0, 00:25:46.562 "w_mbytes_per_sec": 0 00:25:46.562 }, 00:25:46.562 "claimed": false, 00:25:46.562 "zoned": false, 00:25:46.562 "supported_io_types": { 00:25:46.562 "read": true, 00:25:46.562 "write": true, 00:25:46.562 "unmap": true, 00:25:46.562 "flush": false, 00:25:46.562 "reset": true, 00:25:46.562 "nvme_admin": false, 00:25:46.562 "nvme_io": false, 00:25:46.562 "nvme_io_md": false, 00:25:46.562 "write_zeroes": true, 00:25:46.562 "zcopy": false, 00:25:46.562 "get_zone_info": false, 00:25:46.562 "zone_management": false, 00:25:46.562 "zone_append": false, 00:25:46.562 "compare": false, 00:25:46.562 "compare_and_write": false, 00:25:46.562 "abort": false, 00:25:46.562 "seek_hole": true, 00:25:46.562 "seek_data": true, 00:25:46.562 "copy": false, 00:25:46.562 "nvme_iov_md": false 00:25:46.562 }, 00:25:46.562 "driver_specific": { 00:25:46.562 "lvol": { 00:25:46.562 "lvol_store_uuid": "f31e56df-09a0-4b5e-ace8-f0531d46a6c2", 00:25:46.562 "base_bdev": "nvme0n1", 00:25:46.562 "thin_provision": true, 00:25:46.562 "num_allocated_clusters": 0, 00:25:46.562 "snapshot": false, 00:25:46.562 "clone": false, 00:25:46.562 "esnap_clone": false 00:25:46.562 } 00:25:46.562 } 00:25:46.562 } 00:25:46.562 ]' 00:25:46.562 09:31:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:46.562 09:31:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:46.562 09:31:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:46.819 09:31:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:25:46.819 09:31:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:25:46.819 09:31:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:25:46.819 09:31:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:25:46.819 09:31:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:47.076 09:31:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:25:47.076 09:31:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 85bd60f8-e028-4274-b98d-83eca64d69bd 00:25:47.076 09:31:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=85bd60f8-e028-4274-b98d-83eca64d69bd 00:25:47.076 09:31:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:47.076 09:31:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:47.076 09:31:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:47.076 09:31:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 85bd60f8-e028-4274-b98d-83eca64d69bd 00:25:47.333 09:31:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:47.333 { 00:25:47.333 "name": "85bd60f8-e028-4274-b98d-83eca64d69bd", 00:25:47.333 "aliases": [ 00:25:47.333 "lvs/nvme0n1p0" 00:25:47.333 ], 00:25:47.333 "product_name": "Logical Volume", 00:25:47.333 "block_size": 4096, 00:25:47.333 "num_blocks": 26476544, 00:25:47.333 "uuid": "85bd60f8-e028-4274-b98d-83eca64d69bd", 00:25:47.333 "assigned_rate_limits": { 00:25:47.333 "rw_ios_per_sec": 0, 00:25:47.333 "rw_mbytes_per_sec": 0, 00:25:47.333 "r_mbytes_per_sec": 0, 00:25:47.333 "w_mbytes_per_sec": 0 00:25:47.333 }, 00:25:47.333 "claimed": false, 00:25:47.333 "zoned": false, 00:25:47.333 "supported_io_types": { 00:25:47.333 "read": true, 00:25:47.333 "write": true, 00:25:47.333 "unmap": true, 00:25:47.333 "flush": false, 00:25:47.333 "reset": true, 00:25:47.333 "nvme_admin": false, 00:25:47.333 "nvme_io": false, 00:25:47.333 "nvme_io_md": false, 00:25:47.333 "write_zeroes": true, 00:25:47.333 "zcopy": false, 00:25:47.333 "get_zone_info": false, 00:25:47.333 "zone_management": false, 00:25:47.333 "zone_append": false, 00:25:47.333 "compare": false, 00:25:47.333 "compare_and_write": false, 00:25:47.333 "abort": false, 00:25:47.333 "seek_hole": true, 00:25:47.333 "seek_data": true, 00:25:47.333 "copy": false, 00:25:47.333 "nvme_iov_md": false 00:25:47.333 }, 00:25:47.333 "driver_specific": { 00:25:47.333 "lvol": { 00:25:47.333 "lvol_store_uuid": "f31e56df-09a0-4b5e-ace8-f0531d46a6c2", 00:25:47.333 "base_bdev": "nvme0n1", 00:25:47.333 "thin_provision": true, 00:25:47.333 "num_allocated_clusters": 0, 00:25:47.333 "snapshot": false, 00:25:47.333 "clone": false, 00:25:47.333 "esnap_clone": false 00:25:47.333 } 00:25:47.333 } 00:25:47.333 } 00:25:47.333 ]' 00:25:47.333 09:31:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:47.333 09:31:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:47.333 09:31:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:47.333 09:31:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:25:47.333 09:31:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:25:47.333 09:31:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:25:47.333 09:31:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:25:47.333 09:31:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 85bd60f8-e028-4274-b98d-83eca64d69bd 
--l2p_dram_limit 10' 00:25:47.333 09:31:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:25:47.333 09:31:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:25:47.333 09:31:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:47.333 09:31:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 85bd60f8-e028-4274-b98d-83eca64d69bd --l2p_dram_limit 10 -c nvc0n1p0 00:25:47.592 [2024-07-12 09:31:33.752112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.592 [2024-07-12 09:31:33.752191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:47.592 [2024-07-12 09:31:33.752232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:47.592 [2024-07-12 09:31:33.752249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.592 [2024-07-12 09:31:33.752333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.592 [2024-07-12 09:31:33.752355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:47.592 [2024-07-12 09:31:33.752368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:47.592 [2024-07-12 09:31:33.752381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.592 [2024-07-12 09:31:33.752411] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:47.592 [2024-07-12 09:31:33.753385] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:47.592 [2024-07-12 09:31:33.753420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.592 [2024-07-12 09:31:33.753439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:47.592 [2024-07-12 09:31:33.753453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.016 ms 00:25:47.592 [2024-07-12 09:31:33.753466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.592 [2024-07-12 09:31:33.753594] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7eee1cb7-22b3-4cdf-8b62-0ea030119999 00:25:47.592 [2024-07-12 09:31:33.754610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.592 [2024-07-12 09:31:33.754642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:47.592 [2024-07-12 09:31:33.754660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:25:47.592 [2024-07-12 09:31:33.754672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.592 [2024-07-12 09:31:33.759245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.592 [2024-07-12 09:31:33.759295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:47.592 [2024-07-12 09:31:33.759336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.496 ms 00:25:47.592 [2024-07-12 09:31:33.759363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.592 [2024-07-12 09:31:33.759511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.592 [2024-07-12 09:31:33.759533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:47.592 [2024-07-12 09:31:33.759549] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:25:47.592 [2024-07-12 09:31:33.759561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.592 [2024-07-12 09:31:33.759649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.592 [2024-07-12 09:31:33.759668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:47.592 [2024-07-12 09:31:33.759683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:47.592 [2024-07-12 09:31:33.759697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.592 [2024-07-12 09:31:33.759733] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:47.592 [2024-07-12 09:31:33.764657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.592 [2024-07-12 09:31:33.764701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:47.592 [2024-07-12 09:31:33.764733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.936 ms 00:25:47.592 [2024-07-12 09:31:33.764748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.592 [2024-07-12 09:31:33.764810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.592 [2024-07-12 09:31:33.764844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:47.592 [2024-07-12 09:31:33.764857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:47.592 [2024-07-12 09:31:33.764870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.592 [2024-07-12 09:31:33.764923] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:47.592 [2024-07-12 09:31:33.765106] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:47.592 [2024-07-12 09:31:33.765127] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:47.592 [2024-07-12 09:31:33.765147] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:47.592 [2024-07-12 09:31:33.765164] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:47.592 [2024-07-12 09:31:33.765179] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:47.592 [2024-07-12 09:31:33.765192] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:47.592 [2024-07-12 09:31:33.765205] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:47.592 [2024-07-12 09:31:33.765219] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:47.592 [2024-07-12 09:31:33.765234] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:47.592 [2024-07-12 09:31:33.765266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.592 [2024-07-12 09:31:33.765284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:47.592 [2024-07-12 09:31:33.765297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:25:47.592 [2024-07-12 09:31:33.765311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.592 [2024-07-12 09:31:33.765405] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.592 [2024-07-12 09:31:33.765424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:47.592 [2024-07-12 09:31:33.765437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:47.592 [2024-07-12 09:31:33.765450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.592 [2024-07-12 09:31:33.765562] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:47.592 [2024-07-12 09:31:33.765584] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:47.592 [2024-07-12 09:31:33.765608] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:47.592 [2024-07-12 09:31:33.765624] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.592 [2024-07-12 09:31:33.765636] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:47.592 [2024-07-12 09:31:33.765648] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:47.592 [2024-07-12 09:31:33.765659] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:47.592 [2024-07-12 09:31:33.765672] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:47.592 [2024-07-12 09:31:33.765683] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:47.592 [2024-07-12 09:31:33.765705] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:47.592 [2024-07-12 09:31:33.765716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:47.592 [2024-07-12 09:31:33.765729] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:47.592 [2024-07-12 09:31:33.765740] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:47.592 [2024-07-12 09:31:33.765754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:47.593 [2024-07-12 09:31:33.765765] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:47.593 [2024-07-12 09:31:33.765778] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.593 [2024-07-12 09:31:33.765790] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:47.593 [2024-07-12 09:31:33.765805] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:47.593 [2024-07-12 09:31:33.765816] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.593 [2024-07-12 09:31:33.765828] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:47.593 [2024-07-12 09:31:33.765839] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:47.593 [2024-07-12 09:31:33.765851] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.593 [2024-07-12 09:31:33.765862] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:47.593 [2024-07-12 09:31:33.765874] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:47.593 [2024-07-12 09:31:33.765884] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.593 [2024-07-12 09:31:33.765896] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:47.593 [2024-07-12 09:31:33.765907] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:47.593 [2024-07-12 09:31:33.765919] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.593 [2024-07-12 09:31:33.765930] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:47.593 [2024-07-12 09:31:33.765942] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:47.593 [2024-07-12 09:31:33.765952] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.593 [2024-07-12 09:31:33.765965] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:47.593 [2024-07-12 09:31:33.765975] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:47.593 [2024-07-12 09:31:33.765990] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:47.593 [2024-07-12 09:31:33.766000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:47.593 [2024-07-12 09:31:33.766012] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:47.593 [2024-07-12 09:31:33.766023] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:47.593 [2024-07-12 09:31:33.766035] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:47.593 [2024-07-12 09:31:33.766045] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:47.593 [2024-07-12 09:31:33.766059] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.593 [2024-07-12 09:31:33.766069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:47.593 [2024-07-12 09:31:33.766081] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:47.593 [2024-07-12 09:31:33.766092] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.593 [2024-07-12 09:31:33.766104] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:47.593 [2024-07-12 09:31:33.766115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:47.593 [2024-07-12 09:31:33.766128] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:47.593 [2024-07-12 09:31:33.766140] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.593 [2024-07-12 09:31:33.766154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:47.593 [2024-07-12 09:31:33.766167] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:47.593 [2024-07-12 09:31:33.766181] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:47.593 [2024-07-12 09:31:33.766209] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:47.593 [2024-07-12 09:31:33.766222] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:47.593 [2024-07-12 09:31:33.766233] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:47.593 [2024-07-12 09:31:33.766256] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:47.593 [2024-07-12 09:31:33.766270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:47.593 [2024-07-12 09:31:33.766288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:47.593 [2024-07-12 09:31:33.766300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:47.593 [2024-07-12 09:31:33.766313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:47.593 [2024-07-12 09:31:33.766341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:47.593 [2024-07-12 09:31:33.766358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:47.593 [2024-07-12 09:31:33.766370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:47.593 [2024-07-12 09:31:33.766383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:47.593 [2024-07-12 09:31:33.766394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:47.593 [2024-07-12 09:31:33.766409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:47.593 [2024-07-12 09:31:33.766421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:47.593 [2024-07-12 09:31:33.766437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:47.593 [2024-07-12 09:31:33.766448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:47.593 [2024-07-12 09:31:33.766462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:47.593 [2024-07-12 09:31:33.766473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:47.593 [2024-07-12 09:31:33.766486] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:47.593 [2024-07-12 09:31:33.766499] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:47.593 [2024-07-12 09:31:33.766513] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:47.593 [2024-07-12 09:31:33.766525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:47.593 [2024-07-12 09:31:33.766538] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:47.593 [2024-07-12 09:31:33.766550] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:47.593 [2024-07-12 09:31:33.766564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.593 [2024-07-12 09:31:33.766576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:47.593 [2024-07-12 09:31:33.766590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.066 ms 00:25:47.593 [2024-07-12 09:31:33.766601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.593 [2024-07-12 09:31:33.766658] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:47.593 [2024-07-12 09:31:33.766675] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:50.875 [2024-07-12 09:31:36.688014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.875 [2024-07-12 09:31:36.688078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:50.875 [2024-07-12 09:31:36.688103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2921.367 ms 00:25:50.875 [2024-07-12 09:31:36.688117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.875 [2024-07-12 09:31:36.721679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.875 [2024-07-12 09:31:36.721733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:50.875 [2024-07-12 09:31:36.721757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.281 ms 00:25:50.875 [2024-07-12 09:31:36.721770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.875 [2024-07-12 09:31:36.721951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.875 [2024-07-12 09:31:36.721971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:50.875 [2024-07-12 09:31:36.721987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:25:50.875 [2024-07-12 09:31:36.722002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.875 [2024-07-12 09:31:36.761012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.875 [2024-07-12 09:31:36.761071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:50.875 [2024-07-12 09:31:36.761094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.951 ms 00:25:50.875 [2024-07-12 09:31:36.761107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.875 [2024-07-12 09:31:36.761168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.875 [2024-07-12 09:31:36.761213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:50.875 [2024-07-12 09:31:36.761231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:50.875 [2024-07-12 09:31:36.761243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.875 [2024-07-12 09:31:36.761616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.875 [2024-07-12 09:31:36.761642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:50.875 [2024-07-12 09:31:36.761659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:25:50.875 [2024-07-12 09:31:36.761671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.875 [2024-07-12 09:31:36.761818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.875 [2024-07-12 09:31:36.761837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:50.875 [2024-07-12 09:31:36.761855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:25:50.875 [2024-07-12 09:31:36.761866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.875 [2024-07-12 09:31:36.779127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.875 [2024-07-12 09:31:36.779179] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:50.875 [2024-07-12 09:31:36.779218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.229 ms 00:25:50.875 [2024-07-12 09:31:36.779231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.875 [2024-07-12 09:31:36.793245] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:50.875 [2024-07-12 09:31:36.796032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.875 [2024-07-12 09:31:36.796073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:50.875 [2024-07-12 09:31:36.796092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.689 ms 00:25:50.875 [2024-07-12 09:31:36.796106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.875 [2024-07-12 09:31:36.916382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.875 [2024-07-12 09:31:36.916449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:50.875 [2024-07-12 09:31:36.916471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 120.232 ms 00:25:50.875 [2024-07-12 09:31:36.916487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.875 [2024-07-12 09:31:36.916700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.875 [2024-07-12 09:31:36.916726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:50.875 [2024-07-12 09:31:36.916741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:25:50.875 [2024-07-12 09:31:36.916757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.875 [2024-07-12 09:31:36.949722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.875 [2024-07-12 09:31:36.949774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:50.875 [2024-07-12 09:31:36.949794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.900 ms 00:25:50.875 [2024-07-12 09:31:36.949809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.875 [2024-07-12 09:31:36.982175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.875 [2024-07-12 09:31:36.982251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:50.875 [2024-07-12 09:31:36.982271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.330 ms 00:25:50.875 [2024-07-12 09:31:36.982285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.875 [2024-07-12 09:31:36.982995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.875 [2024-07-12 09:31:36.983031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:50.875 [2024-07-12 09:31:36.983047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:25:50.875 [2024-07-12 09:31:36.983064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.875 [2024-07-12 09:31:37.079109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.875 [2024-07-12 09:31:37.079239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:50.875 [2024-07-12 09:31:37.079262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.979 ms 00:25:50.875 [2024-07-12 09:31:37.079281] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.875 [2024-07-12 09:31:37.113410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.875 [2024-07-12 09:31:37.113493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:50.875 [2024-07-12 09:31:37.113512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.074 ms 00:25:50.875 [2024-07-12 09:31:37.113526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.875 [2024-07-12 09:31:37.147747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.875 [2024-07-12 09:31:37.147798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:50.875 [2024-07-12 09:31:37.147816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.153 ms 00:25:50.875 [2024-07-12 09:31:37.147830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.875 [2024-07-12 09:31:37.181769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.875 [2024-07-12 09:31:37.181849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:50.875 [2024-07-12 09:31:37.181868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.889 ms 00:25:50.875 [2024-07-12 09:31:37.181883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.875 [2024-07-12 09:31:37.181952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.875 [2024-07-12 09:31:37.181976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:50.875 [2024-07-12 09:31:37.181990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:25:50.875 [2024-07-12 09:31:37.182007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.875 [2024-07-12 09:31:37.182135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.875 [2024-07-12 09:31:37.182174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:50.875 [2024-07-12 09:31:37.182190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:50.875 [2024-07-12 09:31:37.182204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.875 [2024-07-12 09:31:37.183342] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3430.740 ms, result 0 00:25:50.875 { 00:25:50.875 "name": "ftl0", 00:25:50.875 "uuid": "7eee1cb7-22b3-4cdf-8b62-0ea030119999" 00:25:50.875 } 00:25:50.875 09:31:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:25:50.875 09:31:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:51.442 09:31:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:25:51.442 09:31:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:25:51.442 09:31:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:25:51.442 /dev/nbd0 00:25:51.442 09:31:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:25:51.442 09:31:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:51.442 09:31:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # local i 00:25:51.442 09:31:37 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:51.442 09:31:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:51.442 09:31:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:51.442 09:31:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # break 00:25:51.442 09:31:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:51.442 09:31:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:51.442 09:31:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:25:51.700 1+0 records in 00:25:51.700 1+0 records out 00:25:51.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430059 s, 9.5 MB/s 00:25:51.700 09:31:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:51.700 09:31:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # size=4096 00:25:51.700 09:31:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:51.700 09:31:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:51.700 09:31:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # return 0 00:25:51.700 09:31:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:25:51.700 [2024-07-12 09:31:37.897249] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:51.700 [2024-07-12 09:31:37.897621] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83772 ] 00:25:51.958 [2024-07-12 09:31:38.070481] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.958 [2024-07-12 09:31:38.295826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.510  Copying: 170/1024 [MB] (170 MBps) Copying: 341/1024 [MB] (170 MBps) Copying: 511/1024 [MB] (170 MBps) Copying: 681/1024 [MB] (170 MBps) Copying: 850/1024 [MB] (168 MBps) Copying: 1019/1024 [MB] (168 MBps) Copying: 1024/1024 [MB] (average 169 MBps) 00:25:59.510 00:25:59.510 09:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:02.039 09:31:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:26:02.039 [2024-07-12 09:31:48.071258] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:26:02.039 [2024-07-12 09:31:48.071434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83877 ] 00:26:02.039 [2024-07-12 09:31:48.242037] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.298 [2024-07-12 09:31:48.427370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.752  Copying: 15/1024 [MB] (15 MBps) Copying: 30/1024 [MB] (15 MBps) Copying: 46/1024 [MB] (15 MBps) Copying: 61/1024 [MB] (15 MBps) Copying: 77/1024 [MB] (15 MBps) Copying: 93/1024 [MB] (16 MBps) Copying: 109/1024 [MB] (16 MBps) Copying: 126/1024 [MB] (16 MBps) Copying: 142/1024 [MB] (15 MBps) Copying: 158/1024 [MB] (16 MBps) Copying: 175/1024 [MB] (17 MBps) Copying: 191/1024 [MB] (16 MBps) Copying: 207/1024 [MB] (16 MBps) Copying: 223/1024 [MB] (15 MBps) Copying: 239/1024 [MB] (15 MBps) Copying: 255/1024 [MB] (16 MBps) Copying: 272/1024 [MB] (16 MBps) Copying: 289/1024 [MB] (16 MBps) Copying: 305/1024 [MB] (16 MBps) Copying: 322/1024 [MB] (16 MBps) Copying: 338/1024 [MB] (16 MBps) Copying: 355/1024 [MB] (17 MBps) Copying: 372/1024 [MB] (16 MBps) Copying: 388/1024 [MB] (16 MBps) Copying: 404/1024 [MB] (16 MBps) Copying: 421/1024 [MB] (16 MBps) Copying: 437/1024 [MB] (16 MBps) Copying: 454/1024 [MB] (16 MBps) Copying: 471/1024 [MB] (16 MBps) Copying: 487/1024 [MB] (16 MBps) Copying: 504/1024 [MB] (16 MBps) Copying: 521/1024 [MB] (16 MBps) Copying: 537/1024 [MB] (16 MBps) Copying: 554/1024 [MB] (16 MBps) Copying: 571/1024 [MB] (16 MBps) Copying: 587/1024 [MB] (16 MBps) Copying: 604/1024 [MB] (16 MBps) Copying: 620/1024 [MB] (16 MBps) Copying: 637/1024 [MB] (16 MBps) Copying: 653/1024 [MB] (16 MBps) Copying: 670/1024 [MB] (16 MBps) Copying: 686/1024 [MB] (16 MBps) Copying: 703/1024 [MB] (16 MBps) Copying: 720/1024 [MB] (16 MBps) Copying: 736/1024 [MB] (16 MBps) Copying: 753/1024 [MB] (16 MBps) Copying: 769/1024 [MB] (16 MBps) Copying: 785/1024 [MB] (16 MBps) Copying: 801/1024 [MB] (16 MBps) Copying: 818/1024 [MB] (16 MBps) Copying: 835/1024 [MB] (16 MBps) Copying: 851/1024 [MB] (16 MBps) Copying: 867/1024 [MB] (16 MBps) Copying: 884/1024 [MB] (16 MBps) Copying: 900/1024 [MB] (16 MBps) Copying: 916/1024 [MB] (15 MBps) Copying: 932/1024 [MB] (15 MBps) Copying: 947/1024 [MB] (15 MBps) Copying: 963/1024 [MB] (15 MBps) Copying: 978/1024 [MB] (15 MBps) Copying: 994/1024 [MB] (15 MBps) Copying: 1010/1024 [MB] (16 MBps) Copying: 1024/1024 [MB] (average 16 MBps) 00:27:06.752 00:27:06.752 09:32:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:27:06.752 09:32:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:27:06.752 09:32:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:07.010 [2024-07-12 09:32:53.298701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.010 [2024-07-12 09:32:53.298768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:07.010 [2024-07-12 09:32:53.298806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:07.010 [2024-07-12 09:32:53.298820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.010 [2024-07-12 09:32:53.298865] mngt/ftl_mngt_ioch.c: 
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:07.010 [2024-07-12 09:32:53.302242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.010 [2024-07-12 09:32:53.302280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:07.010 [2024-07-12 09:32:53.302297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.352 ms 00:27:07.010 [2024-07-12 09:32:53.302314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.010 [2024-07-12 09:32:53.303890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.010 [2024-07-12 09:32:53.303945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:07.010 [2024-07-12 09:32:53.303964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.541 ms 00:27:07.010 [2024-07-12 09:32:53.303977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.010 [2024-07-12 09:32:53.320101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.010 [2024-07-12 09:32:53.320173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:07.010 [2024-07-12 09:32:53.320213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.091 ms 00:27:07.010 [2024-07-12 09:32:53.320232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.010 [2024-07-12 09:32:53.326991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.010 [2024-07-12 09:32:53.327056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:07.010 [2024-07-12 09:32:53.327075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.709 ms 00:27:07.010 [2024-07-12 09:32:53.327088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.010 [2024-07-12 09:32:53.358732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.010 [2024-07-12 09:32:53.358811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:07.010 [2024-07-12 09:32:53.358833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.499 ms 00:27:07.010 [2024-07-12 09:32:53.358847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.269 [2024-07-12 09:32:53.377763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.269 [2024-07-12 09:32:53.377845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:07.269 [2024-07-12 09:32:53.377871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.847 ms 00:27:07.269 [2024-07-12 09:32:53.377886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.269 [2024-07-12 09:32:53.378105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.269 [2024-07-12 09:32:53.378133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:07.269 [2024-07-12 09:32:53.378148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:27:07.269 [2024-07-12 09:32:53.378161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.269 [2024-07-12 09:32:53.410078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.269 [2024-07-12 09:32:53.410154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:27:07.269 [2024-07-12 09:32:53.410175] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 31.864 ms 00:27:07.269 [2024-07-12 09:32:53.410220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.269 [2024-07-12 09:32:53.441565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.269 [2024-07-12 09:32:53.441641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:27:07.269 [2024-07-12 09:32:53.441662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.277 ms 00:27:07.269 [2024-07-12 09:32:53.441676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.269 [2024-07-12 09:32:53.472803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.269 [2024-07-12 09:32:53.472885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:07.269 [2024-07-12 09:32:53.472907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.053 ms 00:27:07.269 [2024-07-12 09:32:53.472921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.270 [2024-07-12 09:32:53.504035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.270 [2024-07-12 09:32:53.504109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:07.270 [2024-07-12 09:32:53.504131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.960 ms 00:27:07.270 [2024-07-12 09:32:53.504145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.270 [2024-07-12 09:32:53.504232] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:07.270 [2024-07-12 09:32:53.504265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504449] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504777] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.504998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 
09:32:53.505143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 
00:27:07.270 [2024-07-12 09:32:53.505487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:07.270 [2024-07-12 09:32:53.505513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:07.271 [2024-07-12 09:32:53.505525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:07.271 [2024-07-12 09:32:53.505540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:07.271 [2024-07-12 09:32:53.505552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:07.271 [2024-07-12 09:32:53.505567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:07.271 [2024-07-12 09:32:53.505579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:07.271 [2024-07-12 09:32:53.505592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:07.271 [2024-07-12 09:32:53.505604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:07.271 [2024-07-12 09:32:53.505619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:07.271 [2024-07-12 09:32:53.505631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:07.271 [2024-07-12 09:32:53.505655] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:07.271 [2024-07-12 09:32:53.505667] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7eee1cb7-22b3-4cdf-8b62-0ea030119999 00:27:07.271 [2024-07-12 09:32:53.505682] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:07.271 [2024-07-12 09:32:53.505693] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:07.271 [2024-07-12 09:32:53.505714] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:07.271 [2024-07-12 09:32:53.505727] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:07.271 [2024-07-12 09:32:53.505750] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:07.271 [2024-07-12 09:32:53.505762] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:07.271 [2024-07-12 09:32:53.505775] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:07.271 [2024-07-12 09:32:53.505786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:07.271 [2024-07-12 09:32:53.505798] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:07.271 [2024-07-12 09:32:53.505809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.271 [2024-07-12 09:32:53.505823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:07.271 [2024-07-12 09:32:53.505836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.580 ms 00:27:07.271 [2024-07-12 09:32:53.505848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.271 [2024-07-12 09:32:53.522609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.271 [2024-07-12 
09:32:53.522669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:07.271 [2024-07-12 09:32:53.522695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.681 ms 00:27:07.271 [2024-07-12 09:32:53.522710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.271 [2024-07-12 09:32:53.523167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.271 [2024-07-12 09:32:53.523221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:07.271 [2024-07-12 09:32:53.523239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:27:07.271 [2024-07-12 09:32:53.523252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.271 [2024-07-12 09:32:53.576272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.271 [2024-07-12 09:32:53.576353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:07.271 [2024-07-12 09:32:53.576374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.271 [2024-07-12 09:32:53.576389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.271 [2024-07-12 09:32:53.576482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.271 [2024-07-12 09:32:53.576501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:07.271 [2024-07-12 09:32:53.576514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.271 [2024-07-12 09:32:53.576528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.271 [2024-07-12 09:32:53.576649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.271 [2024-07-12 09:32:53.576679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:07.271 [2024-07-12 09:32:53.576693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.271 [2024-07-12 09:32:53.576706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.271 [2024-07-12 09:32:53.576744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.271 [2024-07-12 09:32:53.576763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:07.271 [2024-07-12 09:32:53.576775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.271 [2024-07-12 09:32:53.576789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.530 [2024-07-12 09:32:53.678245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.530 [2024-07-12 09:32:53.678320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:07.530 [2024-07-12 09:32:53.678341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.530 [2024-07-12 09:32:53.678355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.530 [2024-07-12 09:32:53.764911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.530 [2024-07-12 09:32:53.764992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:07.530 [2024-07-12 09:32:53.765013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.530 [2024-07-12 09:32:53.765028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.530 [2024-07-12 09:32:53.765152] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.530 [2024-07-12 09:32:53.765177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:07.530 [2024-07-12 09:32:53.765223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.530 [2024-07-12 09:32:53.765240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.530 [2024-07-12 09:32:53.765308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.530 [2024-07-12 09:32:53.765335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:07.530 [2024-07-12 09:32:53.765348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.530 [2024-07-12 09:32:53.765361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.530 [2024-07-12 09:32:53.765487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.530 [2024-07-12 09:32:53.765512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:07.530 [2024-07-12 09:32:53.765525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.530 [2024-07-12 09:32:53.765542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.530 [2024-07-12 09:32:53.765603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.530 [2024-07-12 09:32:53.765624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:07.530 [2024-07-12 09:32:53.765637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.530 [2024-07-12 09:32:53.765650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.530 [2024-07-12 09:32:53.765698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.530 [2024-07-12 09:32:53.765716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:07.530 [2024-07-12 09:32:53.765730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.530 [2024-07-12 09:32:53.765746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.530 [2024-07-12 09:32:53.765801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:07.530 [2024-07-12 09:32:53.765825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:07.530 [2024-07-12 09:32:53.765838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:07.530 [2024-07-12 09:32:53.765852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.530 [2024-07-12 09:32:53.766018] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 467.285 ms, result 0 00:27:07.530 true 00:27:07.530 09:32:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 83624 00:27:07.530 09:32:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid83624 00:27:07.530 09:32:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:27:07.789 [2024-07-12 09:32:53.902662] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
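(Editor's note, a minimal sketch and not part of the test output: the spdk_dd invocation above writes --count=262144 blocks of --bs=4096 bytes, i.e. exactly 1 GiB, which is why the copy progress that follows counts up to 1024/1024 [MB]. The block size and count are taken from the command line in this log; the check itself is illustrative only.)
  echo $(( 262144 * 4096 / 1024 / 1024 )) MiB   # -> 1024 MiB written by spdk_dd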
00:27:07.790 [2024-07-12 09:32:53.902909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84526 ] 00:27:07.790 [2024-07-12 09:32:54.074248] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.048 [2024-07-12 09:32:54.289270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.549  Copying: 168/1024 [MB] (168 MBps) Copying: 342/1024 [MB] (174 MBps) Copying: 516/1024 [MB] (173 MBps) Copying: 689/1024 [MB] (173 MBps) Copying: 863/1024 [MB] (173 MBps) Copying: 1024/1024 [MB] (average 172 MBps) 00:27:15.549 00:27:15.549 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 83624 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:27:15.549 09:33:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:15.549 [2024-07-12 09:33:01.722656] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:27:15.549 [2024-07-12 09:33:01.722842] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84609 ] 00:27:15.549 [2024-07-12 09:33:01.896764] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.808 [2024-07-12 09:33:02.081691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.066 [2024-07-12 09:33:02.389149] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:16.066 [2024-07-12 09:33:02.389237] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:16.325 [2024-07-12 09:33:02.455765] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:27:16.325 [2024-07-12 09:33:02.456136] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:27:16.325 [2024-07-12 09:33:02.456513] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:27:16.584 [2024-07-12 09:33:02.693446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.584 [2024-07-12 09:33:02.693518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:16.584 [2024-07-12 09:33:02.693540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:16.584 [2024-07-12 09:33:02.693552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.584 [2024-07-12 09:33:02.693638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.584 [2024-07-12 09:33:02.693659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:16.584 [2024-07-12 09:33:02.693672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:27:16.584 [2024-07-12 09:33:02.693687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.584 [2024-07-12 09:33:02.693721] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:16.584 [2024-07-12 09:33:02.694688] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using bdev as NV Cache device 00:27:16.584 [2024-07-12 09:33:02.694724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.584 [2024-07-12 09:33:02.694738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:16.584 [2024-07-12 09:33:02.694751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.011 ms 00:27:16.584 [2024-07-12 09:33:02.694761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.584 [2024-07-12 09:33:02.695966] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:16.584 [2024-07-12 09:33:02.712152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.584 [2024-07-12 09:33:02.712230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:16.584 [2024-07-12 09:33:02.712260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.187 ms 00:27:16.584 [2024-07-12 09:33:02.712295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.584 [2024-07-12 09:33:02.712394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.584 [2024-07-12 09:33:02.712415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:16.584 [2024-07-12 09:33:02.712428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:27:16.585 [2024-07-12 09:33:02.712439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.585 [2024-07-12 09:33:02.716897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.585 [2024-07-12 09:33:02.716950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:16.585 [2024-07-12 09:33:02.716975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.356 ms 00:27:16.585 [2024-07-12 09:33:02.716987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.585 [2024-07-12 09:33:02.717103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.585 [2024-07-12 09:33:02.717123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:16.585 [2024-07-12 09:33:02.717136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:27:16.585 [2024-07-12 09:33:02.717147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.585 [2024-07-12 09:33:02.717242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.585 [2024-07-12 09:33:02.717262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:16.585 [2024-07-12 09:33:02.717274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:27:16.585 [2024-07-12 09:33:02.717289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.585 [2024-07-12 09:33:02.717325] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:16.585 [2024-07-12 09:33:02.721652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.585 [2024-07-12 09:33:02.721695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:16.585 [2024-07-12 09:33:02.721711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.336 ms 00:27:16.585 [2024-07-12 09:33:02.721723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.585 [2024-07-12 09:33:02.721774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:27:16.585 [2024-07-12 09:33:02.721791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:16.585 [2024-07-12 09:33:02.721804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:16.585 [2024-07-12 09:33:02.721814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.585 [2024-07-12 09:33:02.721863] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:16.585 [2024-07-12 09:33:02.721894] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:16.585 [2024-07-12 09:33:02.721949] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:16.585 [2024-07-12 09:33:02.721970] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:27:16.585 [2024-07-12 09:33:02.722075] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:16.585 [2024-07-12 09:33:02.722090] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:16.585 [2024-07-12 09:33:02.722104] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:16.585 [2024-07-12 09:33:02.722119] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:16.585 [2024-07-12 09:33:02.722132] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:16.585 [2024-07-12 09:33:02.722144] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:16.585 [2024-07-12 09:33:02.722159] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:16.585 [2024-07-12 09:33:02.722170] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:16.585 [2024-07-12 09:33:02.722180] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:16.585 [2024-07-12 09:33:02.722217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.585 [2024-07-12 09:33:02.722229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:16.585 [2024-07-12 09:33:02.722241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:27:16.585 [2024-07-12 09:33:02.722252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.585 [2024-07-12 09:33:02.722350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.585 [2024-07-12 09:33:02.722365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:16.585 [2024-07-12 09:33:02.722376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:16.585 [2024-07-12 09:33:02.722387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.585 [2024-07-12 09:33:02.722527] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:16.585 [2024-07-12 09:33:02.722554] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:16.585 [2024-07-12 09:33:02.722576] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:16.585 [2024-07-12 09:33:02.722597] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:16.585 [2024-07-12 
09:33:02.722618] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:16.585 [2024-07-12 09:33:02.722634] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:16.585 [2024-07-12 09:33:02.722645] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:16.585 [2024-07-12 09:33:02.722655] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:16.585 [2024-07-12 09:33:02.722666] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:16.585 [2024-07-12 09:33:02.722676] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:16.585 [2024-07-12 09:33:02.722687] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:16.585 [2024-07-12 09:33:02.722698] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:16.585 [2024-07-12 09:33:02.722707] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:16.585 [2024-07-12 09:33:02.722718] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:16.585 [2024-07-12 09:33:02.722729] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:16.585 [2024-07-12 09:33:02.722739] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:16.585 [2024-07-12 09:33:02.722764] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:16.585 [2024-07-12 09:33:02.722774] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:16.585 [2024-07-12 09:33:02.722784] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:16.585 [2024-07-12 09:33:02.722794] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:16.585 [2024-07-12 09:33:02.722805] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:16.585 [2024-07-12 09:33:02.722814] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:16.585 [2024-07-12 09:33:02.722824] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:16.585 [2024-07-12 09:33:02.722838] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:16.585 [2024-07-12 09:33:02.722855] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:16.585 [2024-07-12 09:33:02.722873] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:16.585 [2024-07-12 09:33:02.722893] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:16.585 [2024-07-12 09:33:02.722912] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:16.585 [2024-07-12 09:33:02.722928] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:16.585 [2024-07-12 09:33:02.722939] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:16.585 [2024-07-12 09:33:02.722952] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:16.585 [2024-07-12 09:33:02.722962] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:16.585 [2024-07-12 09:33:02.722972] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:16.585 [2024-07-12 09:33:02.722982] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:16.585 [2024-07-12 09:33:02.722992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:16.585 [2024-07-12 09:33:02.723002] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 
MiB 00:27:16.585 [2024-07-12 09:33:02.723013] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:16.585 [2024-07-12 09:33:02.723023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:16.585 [2024-07-12 09:33:02.723033] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:16.585 [2024-07-12 09:33:02.723043] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:16.585 [2024-07-12 09:33:02.723052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:16.585 [2024-07-12 09:33:02.723063] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:16.585 [2024-07-12 09:33:02.723073] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:16.585 [2024-07-12 09:33:02.723082] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:16.585 [2024-07-12 09:33:02.723095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:16.585 [2024-07-12 09:33:02.723115] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:16.585 [2024-07-12 09:33:02.723136] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:16.585 [2024-07-12 09:33:02.723155] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:16.585 [2024-07-12 09:33:02.723168] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:16.585 [2024-07-12 09:33:02.723207] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:16.585 [2024-07-12 09:33:02.723224] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:16.585 [2024-07-12 09:33:02.723235] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:16.585 [2024-07-12 09:33:02.723246] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:16.585 [2024-07-12 09:33:02.723258] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:16.585 [2024-07-12 09:33:02.723277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:16.585 [2024-07-12 09:33:02.723290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:16.585 [2024-07-12 09:33:02.723301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:16.585 [2024-07-12 09:33:02.723313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:16.585 [2024-07-12 09:33:02.723323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:16.585 [2024-07-12 09:33:02.723334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:16.585 [2024-07-12 09:33:02.723345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:16.585 [2024-07-12 09:33:02.723356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:16.585 [2024-07-12 09:33:02.723367] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:16.585 [2024-07-12 09:33:02.723382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:16.586 [2024-07-12 09:33:02.723402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:16.586 [2024-07-12 09:33:02.723424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:16.586 [2024-07-12 09:33:02.723445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:16.586 [2024-07-12 09:33:02.723466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:16.586 [2024-07-12 09:33:02.723487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:16.586 [2024-07-12 09:33:02.723500] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:16.586 [2024-07-12 09:33:02.723532] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:16.586 [2024-07-12 09:33:02.723556] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:16.586 [2024-07-12 09:33:02.723570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:16.586 [2024-07-12 09:33:02.723582] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:16.586 [2024-07-12 09:33:02.723593] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:16.586 [2024-07-12 09:33:02.723606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.586 [2024-07-12 09:33:02.723618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:16.586 [2024-07-12 09:33:02.723635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.145 ms 00:27:16.586 [2024-07-12 09:33:02.723646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.586 [2024-07-12 09:33:02.766600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.586 [2024-07-12 09:33:02.766667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:16.586 [2024-07-12 09:33:02.766689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.883 ms 00:27:16.586 [2024-07-12 09:33:02.766702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.586 [2024-07-12 09:33:02.766825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.586 [2024-07-12 09:33:02.766842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:16.586 [2024-07-12 09:33:02.766855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:16.586 [2024-07-12 09:33:02.766873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
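(Editor's note, a minimal sketch and not part of the test output: the hex blk_sz values in the superblock layout dump above line up with the MiB figures in the region dump once multiplied by the 4 KiB FTL block size implied by this log; region types and sizes below are copied from the dump, the loop is illustrative only.)
  for sz in 0x5000 0x1900000; do
    printf '%s blocks -> %d MiB\n' "$sz" $(( sz * 4096 / 1024 / 1024 ))
  done
  # 0x5000 blocks   -> 80 MiB      (matches "Region l2p ... blocks: 80.00 MiB")
  # 0x1900000 blocks -> 102400 MiB (matches "Region data_btm ... blocks: 102400.00 MiB")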
00:27:16.586 [2024-07-12 09:33:02.805276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.586 [2024-07-12 09:33:02.805338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:16.586 [2024-07-12 09:33:02.805376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.305 ms 00:27:16.586 [2024-07-12 09:33:02.805388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.586 [2024-07-12 09:33:02.805463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.586 [2024-07-12 09:33:02.805486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:16.586 [2024-07-12 09:33:02.805499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:16.586 [2024-07-12 09:33:02.805510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.586 [2024-07-12 09:33:02.805882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.586 [2024-07-12 09:33:02.805901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:16.586 [2024-07-12 09:33:02.805914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:27:16.586 [2024-07-12 09:33:02.805925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.586 [2024-07-12 09:33:02.806078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.586 [2024-07-12 09:33:02.806096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:16.586 [2024-07-12 09:33:02.806112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:27:16.586 [2024-07-12 09:33:02.806123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.586 [2024-07-12 09:33:02.822056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.586 [2024-07-12 09:33:02.822114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:16.586 [2024-07-12 09:33:02.822134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.904 ms 00:27:16.586 [2024-07-12 09:33:02.822145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.586 [2024-07-12 09:33:02.838937] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:16.586 [2024-07-12 09:33:02.839007] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:16.586 [2024-07-12 09:33:02.839030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.586 [2024-07-12 09:33:02.839043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:16.586 [2024-07-12 09:33:02.839059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.710 ms 00:27:16.586 [2024-07-12 09:33:02.839071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.586 [2024-07-12 09:33:02.869676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.586 [2024-07-12 09:33:02.869757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:16.586 [2024-07-12 09:33:02.869799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.517 ms 00:27:16.586 [2024-07-12 09:33:02.869812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.586 [2024-07-12 09:33:02.886129] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:27:16.586 [2024-07-12 09:33:02.886200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:16.586 [2024-07-12 09:33:02.886221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.228 ms 00:27:16.586 [2024-07-12 09:33:02.886233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.586 [2024-07-12 09:33:02.902287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.586 [2024-07-12 09:33:02.902359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:16.586 [2024-07-12 09:33:02.902381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.979 ms 00:27:16.586 [2024-07-12 09:33:02.902393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.586 [2024-07-12 09:33:02.903294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.586 [2024-07-12 09:33:02.903336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:16.586 [2024-07-12 09:33:02.903359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:27:16.586 [2024-07-12 09:33:02.903383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.843 [2024-07-12 09:33:02.977451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.843 [2024-07-12 09:33:02.977524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:16.843 [2024-07-12 09:33:02.977545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.030 ms 00:27:16.843 [2024-07-12 09:33:02.977557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.843 [2024-07-12 09:33:02.991237] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:16.843 [2024-07-12 09:33:02.994097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.843 [2024-07-12 09:33:02.994144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:16.843 [2024-07-12 09:33:02.994164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.455 ms 00:27:16.843 [2024-07-12 09:33:02.994176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.843 [2024-07-12 09:33:02.994351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.843 [2024-07-12 09:33:02.994379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:16.843 [2024-07-12 09:33:02.994398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:27:16.843 [2024-07-12 09:33:02.994409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.843 [2024-07-12 09:33:02.994510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.843 [2024-07-12 09:33:02.994529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:16.843 [2024-07-12 09:33:02.994541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:27:16.843 [2024-07-12 09:33:02.994551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.843 [2024-07-12 09:33:02.994597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.843 [2024-07-12 09:33:02.994611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:16.843 [2024-07-12 09:33:02.994630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.006 ms 00:27:16.843 [2024-07-12 09:33:02.994657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.843 [2024-07-12 09:33:02.994713] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:16.843 [2024-07-12 09:33:02.994732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.843 [2024-07-12 09:33:02.994743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:16.843 [2024-07-12 09:33:02.994757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:27:16.843 [2024-07-12 09:33:02.994768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.843 [2024-07-12 09:33:03.026612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.843 [2024-07-12 09:33:03.026684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:16.843 [2024-07-12 09:33:03.026714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.813 ms 00:27:16.843 [2024-07-12 09:33:03.026730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.843 [2024-07-12 09:33:03.026849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.843 [2024-07-12 09:33:03.026868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:16.843 [2024-07-12 09:33:03.026881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:27:16.843 [2024-07-12 09:33:03.026892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.843 [2024-07-12 09:33:03.028164] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 334.151 ms, result 0 00:27:58.071  Copying: 27/1024 [MB] (27 MBps) Copying: 55/1024 [MB] (27 MBps) Copying: 82/1024 [MB] (27 MBps) Copying: 109/1024 [MB] (26 MBps) Copying: 136/1024 [MB] (26 MBps) Copying: 163/1024 [MB] (26 MBps) Copying: 190/1024 [MB] (27 MBps) Copying: 218/1024 [MB] (27 MBps) Copying: 244/1024 [MB] (26 MBps) Copying: 270/1024 [MB] (25 MBps) Copying: 296/1024 [MB] (25 MBps) Copying: 322/1024 [MB] (26 MBps) Copying: 348/1024 [MB] (25 MBps) Copying: 375/1024 [MB] (26 MBps) Copying: 402/1024 [MB] (26 MBps) Copying: 429/1024 [MB] (27 MBps) Copying: 456/1024 [MB] (26 MBps) Copying: 482/1024 [MB] (26 MBps) Copying: 508/1024 [MB] (25 MBps) Copying: 535/1024 [MB] (26 MBps) Copying: 563/1024 [MB] (27 MBps) Copying: 588/1024 [MB] (25 MBps) Copying: 615/1024 [MB] (26 MBps) Copying: 640/1024 [MB] (25 MBps) Copying: 664/1024 [MB] (24 MBps) Copying: 689/1024 [MB] (24 MBps) Copying: 714080/1048576 [kB] (8260 kBps) Copying: 715/1024 [MB] (18 MBps) Copying: 739/1024 [MB] (24 MBps) Copying: 764/1024 [MB] (24 MBps) Copying: 791/1024 [MB] (26 MBps) Copying: 817/1024 [MB] (26 MBps) Copying: 842/1024 [MB] (25 MBps) Copying: 868/1024 [MB] (25 MBps) Copying: 894/1024 [MB] (26 MBps) Copying: 920/1024 [MB] (25 MBps) Copying: 946/1024 [MB] (25 MBps) Copying: 972/1024 [MB] (26 MBps) Copying: 1000/1024 [MB] (27 MBps) Copying: 1023/1024 [MB] (22 MBps) Copying: 1048464/1048576 [kB] (792 kBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-12 09:33:44.204929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.071 [2024-07-12 09:33:44.205210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:58.071 [2024-07-12 09:33:44.205244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.005 ms 00:27:58.071 [2024-07-12 09:33:44.205260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.071 [2024-07-12 09:33:44.208819] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:58.071 [2024-07-12 09:33:44.215415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.071 [2024-07-12 09:33:44.215461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:58.071 [2024-07-12 09:33:44.215480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.539 ms 00:27:58.071 [2024-07-12 09:33:44.215492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.071 [2024-07-12 09:33:44.227361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.071 [2024-07-12 09:33:44.227417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:58.071 [2024-07-12 09:33:44.227456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.745 ms 00:27:58.071 [2024-07-12 09:33:44.227480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.071 [2024-07-12 09:33:44.250461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.071 [2024-07-12 09:33:44.250518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:58.071 [2024-07-12 09:33:44.250538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.954 ms 00:27:58.071 [2024-07-12 09:33:44.250550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.071 [2024-07-12 09:33:44.257439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.071 [2024-07-12 09:33:44.257479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:58.071 [2024-07-12 09:33:44.257496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.834 ms 00:27:58.071 [2024-07-12 09:33:44.257516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.071 [2024-07-12 09:33:44.288280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.071 [2024-07-12 09:33:44.288345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:58.071 [2024-07-12 09:33:44.288381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.689 ms 00:27:58.071 [2024-07-12 09:33:44.288392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.071 [2024-07-12 09:33:44.306331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.071 [2024-07-12 09:33:44.306390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:58.071 [2024-07-12 09:33:44.306408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.890 ms 00:27:58.071 [2024-07-12 09:33:44.306419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.071 [2024-07-12 09:33:44.412779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.071 [2024-07-12 09:33:44.412864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:58.071 [2024-07-12 09:33:44.412887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.295 ms 00:27:58.071 [2024-07-12 09:33:44.412899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.331 [2024-07-12 09:33:44.445042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.331 [2024-07-12 
09:33:44.445105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:27:58.331 [2024-07-12 09:33:44.445140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.117 ms 00:27:58.331 [2024-07-12 09:33:44.445151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.331 [2024-07-12 09:33:44.476224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.331 [2024-07-12 09:33:44.476301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:27:58.331 [2024-07-12 09:33:44.476319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.011 ms 00:27:58.331 [2024-07-12 09:33:44.476331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.331 [2024-07-12 09:33:44.508304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.331 [2024-07-12 09:33:44.508375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:58.331 [2024-07-12 09:33:44.508394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.910 ms 00:27:58.331 [2024-07-12 09:33:44.508406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.331 [2024-07-12 09:33:44.539964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.331 [2024-07-12 09:33:44.540043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:58.331 [2024-07-12 09:33:44.540063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.414 ms 00:27:58.331 [2024-07-12 09:33:44.540073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.331 [2024-07-12 09:33:44.540146] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:58.331 [2024-07-12 09:33:44.540172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130560 / 261120 wr_cnt: 1 state: open 00:27:58.331 [2024-07-12 09:33:44.540199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540333] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 
[2024-07-12 09:33:44.540639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:58.331 [2024-07-12 09:33:44.540848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.540860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.540872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.540883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.540895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.540906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.540918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 
state: free 00:27:58.332 [2024-07-12 09:33:44.540929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.540941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.540952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.540964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.540976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.540988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 
0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:58.332 [2024-07-12 09:33:44.541407] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:58.332 [2024-07-12 09:33:44.541418] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7eee1cb7-22b3-4cdf-8b62-0ea030119999 00:27:58.332 [2024-07-12 09:33:44.541430] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130560 00:27:58.332 [2024-07-12 09:33:44.541441] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 131520 00:27:58.332 [2024-07-12 09:33:44.541460] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130560 00:27:58.332 [2024-07-12 09:33:44.541476] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:27:58.332 [2024-07-12 09:33:44.541486] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:58.332 [2024-07-12 09:33:44.541497] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:58.332 [2024-07-12 09:33:44.541508] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:58.332 [2024-07-12 09:33:44.541518] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:58.332 [2024-07-12 09:33:44.541528] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:58.332 [2024-07-12 09:33:44.541539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.332 [2024-07-12 09:33:44.541551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:58.332 [2024-07-12 09:33:44.541576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.396 ms 00:27:58.332 [2024-07-12 09:33:44.541587] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.332 [2024-07-12 09:33:44.558419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.332 [2024-07-12 09:33:44.558479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:58.332 [2024-07-12 09:33:44.558509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.771 ms 00:27:58.332 [2024-07-12 09:33:44.558521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.332 [2024-07-12 09:33:44.558986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.332 [2024-07-12 09:33:44.559007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:58.332 [2024-07-12 09:33:44.559021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:27:58.332 [2024-07-12 09:33:44.559031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.332 [2024-07-12 09:33:44.596166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.332 [2024-07-12 09:33:44.596238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:58.332 [2024-07-12 09:33:44.596263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.332 [2024-07-12 09:33:44.596275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.332 [2024-07-12 09:33:44.596357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.332 [2024-07-12 09:33:44.596373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:58.332 [2024-07-12 09:33:44.596384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.332 [2024-07-12 09:33:44.596395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.332 [2024-07-12 09:33:44.596498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.332 [2024-07-12 09:33:44.596525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:58.332 [2024-07-12 09:33:44.596537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.332 [2024-07-12 09:33:44.596548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.332 [2024-07-12 09:33:44.596570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.332 [2024-07-12 09:33:44.596583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:58.332 [2024-07-12 09:33:44.596594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.332 [2024-07-12 09:33:44.596605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.596 [2024-07-12 09:33:44.696415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.596 [2024-07-12 09:33:44.696478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:58.596 [2024-07-12 09:33:44.696498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.596 [2024-07-12 09:33:44.696509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.596 [2024-07-12 09:33:44.782054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.596 [2024-07-12 09:33:44.782121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:58.596 [2024-07-12 09:33:44.782141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:27:58.596 [2024-07-12 09:33:44.782152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.596 [2024-07-12 09:33:44.782258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.596 [2024-07-12 09:33:44.782278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:58.596 [2024-07-12 09:33:44.782303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.596 [2024-07-12 09:33:44.782315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.596 [2024-07-12 09:33:44.782362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.596 [2024-07-12 09:33:44.782376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:58.596 [2024-07-12 09:33:44.782392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.596 [2024-07-12 09:33:44.782403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.596 [2024-07-12 09:33:44.782523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.596 [2024-07-12 09:33:44.782542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:58.596 [2024-07-12 09:33:44.782555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.596 [2024-07-12 09:33:44.782572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.596 [2024-07-12 09:33:44.782622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.596 [2024-07-12 09:33:44.782639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:58.596 [2024-07-12 09:33:44.782651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.596 [2024-07-12 09:33:44.782662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.596 [2024-07-12 09:33:44.782707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.596 [2024-07-12 09:33:44.782722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:58.596 [2024-07-12 09:33:44.782734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.596 [2024-07-12 09:33:44.782751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.596 [2024-07-12 09:33:44.782803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.596 [2024-07-12 09:33:44.782819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:58.596 [2024-07-12 09:33:44.782831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.596 [2024-07-12 09:33:44.782842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.596 [2024-07-12 09:33:44.782989] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 580.300 ms, result 0 00:28:00.522 00:28:00.522 00:28:00.522 09:33:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:02.425 09:33:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:02.683 [2024-07-12 09:33:48.845652] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 
initialization... 00:28:02.683 [2024-07-12 09:33:48.845804] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85071 ] 00:28:02.683 [2024-07-12 09:33:49.012966] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.941 [2024-07-12 09:33:49.200929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.199 [2024-07-12 09:33:49.510869] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:03.199 [2024-07-12 09:33:49.510979] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:03.458 [2024-07-12 09:33:49.671876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.459 [2024-07-12 09:33:49.671947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:03.459 [2024-07-12 09:33:49.671968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:03.459 [2024-07-12 09:33:49.671980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.459 [2024-07-12 09:33:49.672064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.459 [2024-07-12 09:33:49.672084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:03.459 [2024-07-12 09:33:49.672097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:03.459 [2024-07-12 09:33:49.672112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.459 [2024-07-12 09:33:49.672143] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:03.459 [2024-07-12 09:33:49.673129] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:03.459 [2024-07-12 09:33:49.673165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.459 [2024-07-12 09:33:49.673201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:03.459 [2024-07-12 09:33:49.673218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.028 ms 00:28:03.459 [2024-07-12 09:33:49.673229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.459 [2024-07-12 09:33:49.674352] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:03.459 [2024-07-12 09:33:49.690565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.459 [2024-07-12 09:33:49.690630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:03.459 [2024-07-12 09:33:49.690650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.212 ms 00:28:03.459 [2024-07-12 09:33:49.690662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.459 [2024-07-12 09:33:49.690762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.459 [2024-07-12 09:33:49.690782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:03.459 [2024-07-12 09:33:49.690798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:28:03.459 [2024-07-12 09:33:49.690810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.459 [2024-07-12 09:33:49.695390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:03.459 [2024-07-12 09:33:49.695451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:03.459 [2024-07-12 09:33:49.695469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.473 ms 00:28:03.459 [2024-07-12 09:33:49.695480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.459 [2024-07-12 09:33:49.695604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.459 [2024-07-12 09:33:49.695627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:03.459 [2024-07-12 09:33:49.695640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:28:03.459 [2024-07-12 09:33:49.695651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.459 [2024-07-12 09:33:49.695729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.459 [2024-07-12 09:33:49.695746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:03.459 [2024-07-12 09:33:49.695759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:03.459 [2024-07-12 09:33:49.695770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.459 [2024-07-12 09:33:49.695804] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:03.459 [2024-07-12 09:33:49.700051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.459 [2024-07-12 09:33:49.700087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:03.459 [2024-07-12 09:33:49.700103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.255 ms 00:28:03.459 [2024-07-12 09:33:49.700114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.459 [2024-07-12 09:33:49.700162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.459 [2024-07-12 09:33:49.700178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:03.459 [2024-07-12 09:33:49.700205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:03.459 [2024-07-12 09:33:49.700217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.459 [2024-07-12 09:33:49.700265] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:03.459 [2024-07-12 09:33:49.700297] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:03.459 [2024-07-12 09:33:49.700340] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:03.459 [2024-07-12 09:33:49.700363] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:28:03.459 [2024-07-12 09:33:49.700471] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:03.459 [2024-07-12 09:33:49.700486] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:03.459 [2024-07-12 09:33:49.700500] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:03.459 [2024-07-12 09:33:49.700514] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:03.459 [2024-07-12 09:33:49.700528] 
ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:03.459 [2024-07-12 09:33:49.700539] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:03.459 [2024-07-12 09:33:49.700550] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:03.459 [2024-07-12 09:33:49.700561] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:03.459 [2024-07-12 09:33:49.700571] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:03.459 [2024-07-12 09:33:49.700583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.459 [2024-07-12 09:33:49.700598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:03.459 [2024-07-12 09:33:49.700610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:28:03.459 [2024-07-12 09:33:49.700621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.459 [2024-07-12 09:33:49.700718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.459 [2024-07-12 09:33:49.700739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:03.459 [2024-07-12 09:33:49.700751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:28:03.459 [2024-07-12 09:33:49.700762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.459 [2024-07-12 09:33:49.700894] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:03.459 [2024-07-12 09:33:49.700913] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:03.459 [2024-07-12 09:33:49.700930] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:03.459 [2024-07-12 09:33:49.700941] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.459 [2024-07-12 09:33:49.700953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:03.459 [2024-07-12 09:33:49.700963] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:03.459 [2024-07-12 09:33:49.700975] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:03.459 [2024-07-12 09:33:49.700985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:03.459 [2024-07-12 09:33:49.700996] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:03.459 [2024-07-12 09:33:49.701006] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:03.459 [2024-07-12 09:33:49.701016] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:03.459 [2024-07-12 09:33:49.701026] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:03.459 [2024-07-12 09:33:49.701036] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:03.459 [2024-07-12 09:33:49.701046] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:03.459 [2024-07-12 09:33:49.701057] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:03.459 [2024-07-12 09:33:49.701067] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.459 [2024-07-12 09:33:49.701077] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:03.459 [2024-07-12 09:33:49.701087] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:03.459 [2024-07-12 09:33:49.701097] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.459 [2024-07-12 09:33:49.701107] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:03.459 [2024-07-12 09:33:49.701130] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:03.459 [2024-07-12 09:33:49.701140] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:03.459 [2024-07-12 09:33:49.701150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:03.459 [2024-07-12 09:33:49.701160] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:03.459 [2024-07-12 09:33:49.701170] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:03.459 [2024-07-12 09:33:49.701179] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:03.459 [2024-07-12 09:33:49.701212] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:03.459 [2024-07-12 09:33:49.701224] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:03.459 [2024-07-12 09:33:49.701234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:03.459 [2024-07-12 09:33:49.701244] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:03.459 [2024-07-12 09:33:49.701254] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:03.459 [2024-07-12 09:33:49.701264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:03.459 [2024-07-12 09:33:49.701274] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:03.459 [2024-07-12 09:33:49.701283] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:03.459 [2024-07-12 09:33:49.701293] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:03.459 [2024-07-12 09:33:49.701303] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:03.459 [2024-07-12 09:33:49.701313] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:03.459 [2024-07-12 09:33:49.701323] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:03.459 [2024-07-12 09:33:49.701336] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:03.459 [2024-07-12 09:33:49.701346] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.459 [2024-07-12 09:33:49.701356] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:03.459 [2024-07-12 09:33:49.701366] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:03.459 [2024-07-12 09:33:49.701376] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.459 [2024-07-12 09:33:49.701386] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:03.459 [2024-07-12 09:33:49.701397] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:03.459 [2024-07-12 09:33:49.701408] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:03.459 [2024-07-12 09:33:49.701419] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.459 [2024-07-12 09:33:49.701430] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:03.459 [2024-07-12 09:33:49.701440] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:03.459 [2024-07-12 09:33:49.701450] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:03.459 
[2024-07-12 09:33:49.701460] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:03.459 [2024-07-12 09:33:49.701470] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:03.459 [2024-07-12 09:33:49.701480] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:03.459 [2024-07-12 09:33:49.701492] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:03.459 [2024-07-12 09:33:49.701506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:03.459 [2024-07-12 09:33:49.701519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:03.459 [2024-07-12 09:33:49.701531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:03.459 [2024-07-12 09:33:49.701542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:03.459 [2024-07-12 09:33:49.701553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:03.459 [2024-07-12 09:33:49.701564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:03.459 [2024-07-12 09:33:49.701575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:03.459 [2024-07-12 09:33:49.701586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:03.459 [2024-07-12 09:33:49.701597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:03.459 [2024-07-12 09:33:49.701608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:03.459 [2024-07-12 09:33:49.701619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:03.459 [2024-07-12 09:33:49.701630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:03.459 [2024-07-12 09:33:49.701642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:03.459 [2024-07-12 09:33:49.701653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:03.459 [2024-07-12 09:33:49.701665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:03.459 [2024-07-12 09:33:49.701676] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:03.459 [2024-07-12 09:33:49.701689] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:03.459 [2024-07-12 09:33:49.701701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:03.459 [2024-07-12 09:33:49.701713] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:03.459 [2024-07-12 09:33:49.701724] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:03.459 [2024-07-12 09:33:49.701735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:03.459 [2024-07-12 09:33:49.701747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.459 [2024-07-12 09:33:49.701764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:03.459 [2024-07-12 09:33:49.701776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.920 ms 00:28:03.459 [2024-07-12 09:33:49.701787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.459 [2024-07-12 09:33:49.743428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.459 [2024-07-12 09:33:49.743492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:03.459 [2024-07-12 09:33:49.743513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.578 ms 00:28:03.459 [2024-07-12 09:33:49.743526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.459 [2024-07-12 09:33:49.743660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.459 [2024-07-12 09:33:49.743677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:03.459 [2024-07-12 09:33:49.743690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:28:03.459 [2024-07-12 09:33:49.743701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.459 [2024-07-12 09:33:49.782105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.459 [2024-07-12 09:33:49.782170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:03.459 [2024-07-12 09:33:49.782205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.308 ms 00:28:03.459 [2024-07-12 09:33:49.782220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.459 [2024-07-12 09:33:49.782297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.459 [2024-07-12 09:33:49.782313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:03.459 [2024-07-12 09:33:49.782327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:03.459 [2024-07-12 09:33:49.782337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.459 [2024-07-12 09:33:49.782710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.459 [2024-07-12 09:33:49.782740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:03.459 [2024-07-12 09:33:49.782754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:28:03.459 [2024-07-12 09:33:49.782765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.459 [2024-07-12 09:33:49.782928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.459 [2024-07-12 09:33:49.782956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:03.459 [2024-07-12 09:33:49.782969] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:28:03.459 [2024-07-12 09:33:49.782980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.459 [2024-07-12 09:33:49.799271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.459 [2024-07-12 09:33:49.799335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:03.459 [2024-07-12 09:33:49.799354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.260 ms 00:28:03.459 [2024-07-12 09:33:49.799366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.718 [2024-07-12 09:33:49.816042] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:28:03.718 [2024-07-12 09:33:49.816134] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:03.718 [2024-07-12 09:33:49.816156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.718 [2024-07-12 09:33:49.816169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:03.718 [2024-07-12 09:33:49.816197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.596 ms 00:28:03.718 [2024-07-12 09:33:49.816211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.718 [2024-07-12 09:33:49.846955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.718 [2024-07-12 09:33:49.847040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:03.718 [2024-07-12 09:33:49.847060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.654 ms 00:28:03.718 [2024-07-12 09:33:49.847090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.719 [2024-07-12 09:33:49.863639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.719 [2024-07-12 09:33:49.863711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:03.719 [2024-07-12 09:33:49.863730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.452 ms 00:28:03.719 [2024-07-12 09:33:49.863741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.719 [2024-07-12 09:33:49.880043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.719 [2024-07-12 09:33:49.880120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:03.719 [2024-07-12 09:33:49.880139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.222 ms 00:28:03.719 [2024-07-12 09:33:49.880152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.719 [2024-07-12 09:33:49.881030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.719 [2024-07-12 09:33:49.881066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:03.719 [2024-07-12 09:33:49.881081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:28:03.719 [2024-07-12 09:33:49.881092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.719 [2024-07-12 09:33:49.959051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.719 [2024-07-12 09:33:49.959143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:03.719 [2024-07-12 09:33:49.959166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 77.909 ms 00:28:03.719 [2024-07-12 09:33:49.959179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.719 [2024-07-12 09:33:49.972373] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:03.719 [2024-07-12 09:33:49.975127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.719 [2024-07-12 09:33:49.975177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:03.719 [2024-07-12 09:33:49.975209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.829 ms 00:28:03.719 [2024-07-12 09:33:49.975221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.719 [2024-07-12 09:33:49.975352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.719 [2024-07-12 09:33:49.975372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:03.719 [2024-07-12 09:33:49.975386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:03.719 [2024-07-12 09:33:49.975397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.719 [2024-07-12 09:33:49.977115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.719 [2024-07-12 09:33:49.977165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:03.719 [2024-07-12 09:33:49.977181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.664 ms 00:28:03.719 [2024-07-12 09:33:49.977213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.719 [2024-07-12 09:33:49.977260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.719 [2024-07-12 09:33:49.977275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:03.719 [2024-07-12 09:33:49.977288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:03.719 [2024-07-12 09:33:49.977299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.719 [2024-07-12 09:33:49.977340] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:03.719 [2024-07-12 09:33:49.977357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.719 [2024-07-12 09:33:49.977368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:03.719 [2024-07-12 09:33:49.977383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:03.719 [2024-07-12 09:33:49.977394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.719 [2024-07-12 09:33:50.009356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.719 [2024-07-12 09:33:50.009456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:03.719 [2024-07-12 09:33:50.009478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.933 ms 00:28:03.719 [2024-07-12 09:33:50.009490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.719 [2024-07-12 09:33:50.009626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.719 [2024-07-12 09:33:50.009663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:03.719 [2024-07-12 09:33:50.009676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:28:03.719 [2024-07-12 09:33:50.009687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
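The ftl_layout dump in the startup trace above reports 20971520 L2P entries with a 4-byte address size, alongside an l2p metadata region of 80.00 MiB. A minimal sketch of that arithmetic follows (illustrative only; the 4 KiB FTL block size used in the second check is an assumption, not something this log states):

```python
# Sketch (not part of the test run): cross-check the l2p region size reported
# by ftl_layout.c in the dump above, using only values that appear in the log:
#   "L2P entries: 20971520", "L2P address size: 4", "Region l2p ... blocks: 80.00 MiB"
l2p_entries = 20_971_520      # from the "L2P entries" line
addr_size_bytes = 4           # from the "L2P address size" line

l2p_table_mib = l2p_entries * addr_size_bytes / (1024 * 1024)
print(f"L2P table size: {l2p_table_mib:.2f} MiB")   # -> 80.00 MiB, matching the l2p region

# Assuming a 4 KiB FTL block size (an assumption for illustration, not logged),
# that many entries would map 20971520 * 4 KiB = 80 GiB of user-addressable space.
user_space_gib = l2p_entries * 4096 / (1024 ** 3)
print(f"Addressable user space (assumed 4 KiB blocks): {user_space_gib:.0f} GiB")
```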
00:28:03.719 [2024-07-12 09:33:50.018138] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 343.223 ms, result 0 00:28:40.272  Copying: 856/1048576 [kB] (856 kBps) Copying: 4036/1048576 [kB] (3180 kBps) Copying: 23/1024 [MB] (19 MBps) Copying: 53/1024 [MB] (30 MBps) Copying: 84/1024 [MB] (30 MBps) Copying: 114/1024 [MB] (30 MBps) Copying: 144/1024 [MB] (30 MBps) Copying: 173/1024 [MB] (28 MBps) Copying: 203/1024 [MB] (30 MBps) Copying: 234/1024 [MB] (30 MBps) Copying: 265/1024 [MB] (31 MBps) Copying: 296/1024 [MB] (30 MBps) Copying: 326/1024 [MB] (30 MBps) Copying: 358/1024 [MB] (31 MBps) Copying: 389/1024 [MB] (31 MBps) Copying: 419/1024 [MB] (30 MBps) Copying: 450/1024 [MB] (30 MBps) Copying: 481/1024 [MB] (30 MBps) Copying: 511/1024 [MB] (30 MBps) Copying: 542/1024 [MB] (30 MBps) Copying: 573/1024 [MB] (30 MBps) Copying: 604/1024 [MB] (30 MBps) Copying: 635/1024 [MB] (30 MBps) Copying: 665/1024 [MB] (30 MBps) Copying: 696/1024 [MB] (31 MBps) Copying: 727/1024 [MB] (31 MBps) Copying: 758/1024 [MB] (30 MBps) Copying: 789/1024 [MB] (31 MBps) Copying: 820/1024 [MB] (31 MBps) Copying: 851/1024 [MB] (30 MBps) Copying: 880/1024 [MB] (29 MBps) Copying: 909/1024 [MB] (29 MBps) Copying: 940/1024 [MB] (30 MBps) Copying: 970/1024 [MB] (30 MBps) Copying: 1001/1024 [MB] (30 MBps) Copying: 1024/1024 [MB] (average 28 MBps)[2024-07-12 09:34:26.422364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.272 [2024-07-12 09:34:26.422506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:40.272 [2024-07-12 09:34:26.422559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:40.272 [2024-07-12 09:34:26.422594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.272 [2024-07-12 09:34:26.422688] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:40.272 [2024-07-12 09:34:26.428968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.272 [2024-07-12 09:34:26.429010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:40.272 [2024-07-12 09:34:26.429027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.226 ms 00:28:40.272 [2024-07-12 09:34:26.429045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.272 [2024-07-12 09:34:26.429500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.272 [2024-07-12 09:34:26.429529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:40.272 [2024-07-12 09:34:26.429543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:28:40.272 [2024-07-12 09:34:26.429557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.272 [2024-07-12 09:34:26.440450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.272 [2024-07-12 09:34:26.440517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:40.272 [2024-07-12 09:34:26.440546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.859 ms 00:28:40.272 [2024-07-12 09:34:26.440558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.272 [2024-07-12 09:34:26.447305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.272 [2024-07-12 09:34:26.447346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 
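The "Dump statistics" entries emitted at each FTL shutdown in this log include a WAF (write amplification factor) line; dividing the logged total writes by the logged user writes reproduces the printed value. A small sketch of that check, using only the numbers from the two statistics dumps in this log (illustrative, not part of the test run):

```python
# Sketch: reproduce the WAF values printed by ftl_dev_dump_stats in this log.
# The ratio total_writes / user_writes is consistent with the logged figures.
def waf(total_writes: int, user_writes: int) -> float:
    """Write amplification factor: media writes per user write."""
    return total_writes / user_writes

# Clean-shutdown dump earlier in the log: total writes 131520, user writes 130560.
print(round(waf(131520, 130560), 4))   # -> 1.0074, as logged

# Dump after the dirty-shutdown reload, later in the log: 135872 vs 133888.
print(round(waf(135872, 133888), 4))   # -> 1.0148, as logged
```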
00:28:40.272 [2024-07-12 09:34:26.447362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.703 ms 00:28:40.272 [2024-07-12 09:34:26.447383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.272 [2024-07-12 09:34:26.479123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.272 [2024-07-12 09:34:26.479212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:40.272 [2024-07-12 09:34:26.479233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.648 ms 00:28:40.272 [2024-07-12 09:34:26.479245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.272 [2024-07-12 09:34:26.497063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.272 [2024-07-12 09:34:26.497143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:40.272 [2024-07-12 09:34:26.497162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.768 ms 00:28:40.272 [2024-07-12 09:34:26.497175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.272 [2024-07-12 09:34:26.500589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.272 [2024-07-12 09:34:26.500636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:40.272 [2024-07-12 09:34:26.500652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.351 ms 00:28:40.272 [2024-07-12 09:34:26.500665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.272 [2024-07-12 09:34:26.532673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.272 [2024-07-12 09:34:26.532749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:28:40.272 [2024-07-12 09:34:26.532769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.984 ms 00:28:40.272 [2024-07-12 09:34:26.532780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.272 [2024-07-12 09:34:26.563467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.272 [2024-07-12 09:34:26.563531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:28:40.272 [2024-07-12 09:34:26.563571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.635 ms 00:28:40.272 [2024-07-12 09:34:26.563584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.272 [2024-07-12 09:34:26.594216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.272 [2024-07-12 09:34:26.594274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:40.272 [2024-07-12 09:34:26.594307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.586 ms 00:28:40.272 [2024-07-12 09:34:26.594333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.531 [2024-07-12 09:34:26.625277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.531 [2024-07-12 09:34:26.625326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:40.531 [2024-07-12 09:34:26.625343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.853 ms 00:28:40.531 [2024-07-12 09:34:26.625355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.531 [2024-07-12 09:34:26.625414] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:40.531 [2024-07-12 
09:34:26.625442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:40.531 [2024-07-12 09:34:26.625458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open 00:28:40.531 [2024-07-12 09:34:26.625470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 
00:28:40.531 [2024-07-12 09:34:26.625737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.625989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.626001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.626015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:40.531 [2024-07-12 09:34:26.626028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 
wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626727] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:40.532 [2024-07-12 09:34:26.626750] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:40.532 [2024-07-12 09:34:26.626762] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7eee1cb7-22b3-4cdf-8b62-0ea030119999 00:28:40.532 [2024-07-12 09:34:26.626775] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:28:40.532 [2024-07-12 09:34:26.626787] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 135872 00:28:40.532 [2024-07-12 09:34:26.626798] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 133888 00:28:40.532 [2024-07-12 09:34:26.626819] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0148 00:28:40.532 [2024-07-12 09:34:26.626830] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:40.532 [2024-07-12 09:34:26.626845] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:40.532 [2024-07-12 09:34:26.626864] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:40.532 [2024-07-12 09:34:26.626875] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:40.532 [2024-07-12 09:34:26.626884] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:40.532 [2024-07-12 09:34:26.626895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.532 [2024-07-12 09:34:26.626906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:40.532 [2024-07-12 09:34:26.626918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.490 ms 00:28:40.532 [2024-07-12 09:34:26.626929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.532 [2024-07-12 09:34:26.643470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.532 [2024-07-12 09:34:26.643515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:40.532 [2024-07-12 09:34:26.643532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.497 ms 00:28:40.532 [2024-07-12 09:34:26.643572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.532 [2024-07-12 09:34:26.644007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.532 [2024-07-12 09:34:26.644033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:40.532 [2024-07-12 09:34:26.644047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:28:40.532 [2024-07-12 09:34:26.644058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.532 [2024-07-12 09:34:26.681036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:40.532 [2024-07-12 09:34:26.681118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:40.532 [2024-07-12 09:34:26.681142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:40.532 [2024-07-12 09:34:26.681154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.532 [2024-07-12 09:34:26.681243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:40.532 [2024-07-12 09:34:26.681260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:40.532 [2024-07-12 09:34:26.681272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:40.532 [2024-07-12 09:34:26.681299] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.532 [2024-07-12 09:34:26.681388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:40.532 [2024-07-12 09:34:26.681424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:40.532 [2024-07-12 09:34:26.681436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:40.532 [2024-07-12 09:34:26.681453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.532 [2024-07-12 09:34:26.681475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:40.532 [2024-07-12 09:34:26.681489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:40.532 [2024-07-12 09:34:26.681500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:40.532 [2024-07-12 09:34:26.681511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.532 [2024-07-12 09:34:26.780476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:40.532 [2024-07-12 09:34:26.780552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:40.532 [2024-07-12 09:34:26.780594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:40.532 [2024-07-12 09:34:26.780606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.532 [2024-07-12 09:34:26.862156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:40.532 [2024-07-12 09:34:26.862248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:40.532 [2024-07-12 09:34:26.862285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:40.532 [2024-07-12 09:34:26.862297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.532 [2024-07-12 09:34:26.862368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:40.532 [2024-07-12 09:34:26.862384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:40.532 [2024-07-12 09:34:26.862396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:40.532 [2024-07-12 09:34:26.862407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.532 [2024-07-12 09:34:26.862458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:40.532 [2024-07-12 09:34:26.862473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:40.532 [2024-07-12 09:34:26.862484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:40.532 [2024-07-12 09:34:26.862495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.532 [2024-07-12 09:34:26.862619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:40.533 [2024-07-12 09:34:26.862638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:40.533 [2024-07-12 09:34:26.862651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:40.533 [2024-07-12 09:34:26.862671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.533 [2024-07-12 09:34:26.862732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:40.533 [2024-07-12 09:34:26.862752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:40.533 [2024-07-12 09:34:26.862764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:28:40.533 [2024-07-12 09:34:26.862775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.533 [2024-07-12 09:34:26.862821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:40.533 [2024-07-12 09:34:26.862843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:40.533 [2024-07-12 09:34:26.862855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:40.533 [2024-07-12 09:34:26.862866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.533 [2024-07-12 09:34:26.862921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:40.533 [2024-07-12 09:34:26.862939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:40.533 [2024-07-12 09:34:26.862951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:40.533 [2024-07-12 09:34:26.862961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.533 [2024-07-12 09:34:26.863099] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 440.741 ms, result 0 00:28:42.000 00:28:42.000 00:28:42.000 09:34:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:43.900 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:43.900 09:34:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:43.900 [2024-07-12 09:34:30.077225] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:28:43.900 [2024-07-12 09:34:30.077414] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85470 ] 00:28:43.900 [2024-07-12 09:34:30.251623] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.158 [2024-07-12 09:34:30.477502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.725 [2024-07-12 09:34:30.783134] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:44.725 [2024-07-12 09:34:30.783248] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:44.725 [2024-07-12 09:34:30.942075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.725 [2024-07-12 09:34:30.942145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:44.725 [2024-07-12 09:34:30.942171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:44.725 [2024-07-12 09:34:30.942216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.725 [2024-07-12 09:34:30.942305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.725 [2024-07-12 09:34:30.942332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:44.725 [2024-07-12 09:34:30.942350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:44.725 [2024-07-12 09:34:30.942373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.725 [2024-07-12 09:34:30.942414] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:44.725 [2024-07-12 09:34:30.943609] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:44.725 [2024-07-12 09:34:30.943657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.725 [2024-07-12 09:34:30.943689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:44.725 [2024-07-12 09:34:30.943711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.251 ms 00:28:44.725 [2024-07-12 09:34:30.943730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.725 [2024-07-12 09:34:30.945226] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:44.725 [2024-07-12 09:34:30.959738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.725 [2024-07-12 09:34:30.959786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:44.725 [2024-07-12 09:34:30.959814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.514 ms 00:28:44.725 [2024-07-12 09:34:30.959833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.725 [2024-07-12 09:34:30.959951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.725 [2024-07-12 09:34:30.959980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:44.725 [2024-07-12 09:34:30.960030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:28:44.725 [2024-07-12 09:34:30.960047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.725 [2024-07-12 09:34:30.964554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:44.725 [2024-07-12 09:34:30.964607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:44.725 [2024-07-12 09:34:30.964629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.390 ms 00:28:44.725 [2024-07-12 09:34:30.964645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.725 [2024-07-12 09:34:30.964755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.725 [2024-07-12 09:34:30.964782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:44.725 [2024-07-12 09:34:30.964801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:28:44.725 [2024-07-12 09:34:30.964816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.725 [2024-07-12 09:34:30.964906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.725 [2024-07-12 09:34:30.964934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:44.725 [2024-07-12 09:34:30.964957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:28:44.726 [2024-07-12 09:34:30.964974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.726 [2024-07-12 09:34:30.965018] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:44.726 [2024-07-12 09:34:30.968796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.726 [2024-07-12 09:34:30.968846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:44.726 [2024-07-12 09:34:30.968867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.787 ms 00:28:44.726 [2024-07-12 09:34:30.968884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.726 [2024-07-12 09:34:30.968950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.726 [2024-07-12 09:34:30.968974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:44.726 [2024-07-12 09:34:30.968991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:44.726 [2024-07-12 09:34:30.969007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.726 [2024-07-12 09:34:30.969077] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:44.726 [2024-07-12 09:34:30.969116] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:44.726 [2024-07-12 09:34:30.969175] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:44.726 [2024-07-12 09:34:30.969220] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:28:44.726 [2024-07-12 09:34:30.969337] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:44.726 [2024-07-12 09:34:30.969357] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:44.726 [2024-07-12 09:34:30.969376] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:44.726 [2024-07-12 09:34:30.969398] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:44.726 [2024-07-12 09:34:30.969420] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:44.726 [2024-07-12 09:34:30.969439] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:44.726 [2024-07-12 09:34:30.969455] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:44.726 [2024-07-12 09:34:30.969465] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:44.726 [2024-07-12 09:34:30.969474] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:44.726 [2024-07-12 09:34:30.969491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.726 [2024-07-12 09:34:30.969518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:44.726 [2024-07-12 09:34:30.969538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:28:44.726 [2024-07-12 09:34:30.969554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.726 [2024-07-12 09:34:30.969658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.726 [2024-07-12 09:34:30.969682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:44.726 [2024-07-12 09:34:30.969700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:28:44.726 [2024-07-12 09:34:30.969711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.726 [2024-07-12 09:34:30.969833] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:44.726 [2024-07-12 09:34:30.969872] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:44.726 [2024-07-12 09:34:30.969901] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:44.726 [2024-07-12 09:34:30.969921] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:44.726 [2024-07-12 09:34:30.969941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:44.726 [2024-07-12 09:34:30.969959] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:44.726 [2024-07-12 09:34:30.969976] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:44.726 [2024-07-12 09:34:30.969992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:44.726 [2024-07-12 09:34:30.970010] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:44.726 [2024-07-12 09:34:30.970020] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:44.726 [2024-07-12 09:34:30.970029] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:44.726 [2024-07-12 09:34:30.970038] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:44.726 [2024-07-12 09:34:30.970046] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:44.726 [2024-07-12 09:34:30.970059] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:44.726 [2024-07-12 09:34:30.970076] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:44.726 [2024-07-12 09:34:30.970093] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:44.726 [2024-07-12 09:34:30.970111] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:44.726 [2024-07-12 09:34:30.970128] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:44.726 [2024-07-12 09:34:30.970142] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:44.726 [2024-07-12 09:34:30.970152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:44.726 [2024-07-12 09:34:30.970179] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:44.726 [2024-07-12 09:34:30.970239] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:44.726 [2024-07-12 09:34:30.970260] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:44.726 [2024-07-12 09:34:30.970279] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:44.726 [2024-07-12 09:34:30.970295] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:44.726 [2024-07-12 09:34:30.970312] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:44.726 [2024-07-12 09:34:30.970329] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:44.726 [2024-07-12 09:34:30.970346] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:44.726 [2024-07-12 09:34:30.970361] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:44.726 [2024-07-12 09:34:30.970377] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:44.726 [2024-07-12 09:34:30.970394] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:44.726 [2024-07-12 09:34:30.970413] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:44.726 [2024-07-12 09:34:30.970429] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:44.726 [2024-07-12 09:34:30.970439] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:44.726 [2024-07-12 09:34:30.970450] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:44.726 [2024-07-12 09:34:30.970468] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:44.726 [2024-07-12 09:34:30.970483] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:44.726 [2024-07-12 09:34:30.970500] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:44.726 [2024-07-12 09:34:30.970518] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:44.726 [2024-07-12 09:34:30.970535] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:44.726 [2024-07-12 09:34:30.970565] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:44.726 [2024-07-12 09:34:30.970582] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:44.726 [2024-07-12 09:34:30.970598] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:44.726 [2024-07-12 09:34:30.970611] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:44.726 [2024-07-12 09:34:30.970620] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:44.726 [2024-07-12 09:34:30.970632] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:44.726 [2024-07-12 09:34:30.970649] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:44.726 [2024-07-12 09:34:30.970667] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:44.726 [2024-07-12 09:34:30.970685] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:44.726 [2024-07-12 09:34:30.970702] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:44.726 
[2024-07-12 09:34:30.970719] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:44.726 [2024-07-12 09:34:30.970736] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:44.726 [2024-07-12 09:34:30.970753] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:44.726 [2024-07-12 09:34:30.970770] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:44.726 [2024-07-12 09:34:30.970791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:44.726 [2024-07-12 09:34:30.970811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:44.726 [2024-07-12 09:34:30.970829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:44.726 [2024-07-12 09:34:30.970846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:44.726 [2024-07-12 09:34:30.970862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:44.726 [2024-07-12 09:34:30.970879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:44.726 [2024-07-12 09:34:30.970897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:44.726 [2024-07-12 09:34:30.970915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:44.726 [2024-07-12 09:34:30.970933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:44.726 [2024-07-12 09:34:30.970952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:44.726 [2024-07-12 09:34:30.970969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:44.726 [2024-07-12 09:34:30.970985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:44.726 [2024-07-12 09:34:30.970999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:44.726 [2024-07-12 09:34:30.971009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:44.726 [2024-07-12 09:34:30.971019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:44.726 [2024-07-12 09:34:30.971028] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:44.726 [2024-07-12 09:34:30.971040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:44.726 [2024-07-12 09:34:30.971067] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:44.726 [2024-07-12 09:34:30.971082] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:44.726 [2024-07-12 09:34:30.971101] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:44.726 [2024-07-12 09:34:30.971121] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:44.727 [2024-07-12 09:34:30.971140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.727 [2024-07-12 09:34:30.971159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:44.727 [2024-07-12 09:34:30.971177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.376 ms 00:28:44.727 [2024-07-12 09:34:30.971209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.727 [2024-07-12 09:34:31.006761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.727 [2024-07-12 09:34:31.006840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:44.727 [2024-07-12 09:34:31.006883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.418 ms 00:28:44.727 [2024-07-12 09:34:31.006899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.727 [2024-07-12 09:34:31.007026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.727 [2024-07-12 09:34:31.007051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:44.727 [2024-07-12 09:34:31.007070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:28:44.727 [2024-07-12 09:34:31.007086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.727 [2024-07-12 09:34:31.038793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.727 [2024-07-12 09:34:31.038859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:44.727 [2024-07-12 09:34:31.038884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.579 ms 00:28:44.727 [2024-07-12 09:34:31.038901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.727 [2024-07-12 09:34:31.038984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.727 [2024-07-12 09:34:31.039008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:44.727 [2024-07-12 09:34:31.039028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:44.727 [2024-07-12 09:34:31.039044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.727 [2024-07-12 09:34:31.039594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.727 [2024-07-12 09:34:31.039640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:44.727 [2024-07-12 09:34:31.039665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:28:44.727 [2024-07-12 09:34:31.039684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.727 [2024-07-12 09:34:31.039968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.727 [2024-07-12 09:34:31.040008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:44.727 [2024-07-12 09:34:31.040032] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:28:44.727 [2024-07-12 09:34:31.040050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.727 [2024-07-12 09:34:31.053276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.727 [2024-07-12 09:34:31.053328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:44.727 [2024-07-12 09:34:31.053351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.182 ms 00:28:44.727 [2024-07-12 09:34:31.053369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.727 [2024-07-12 09:34:31.067305] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:44.727 [2024-07-12 09:34:31.067385] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:44.727 [2024-07-12 09:34:31.067411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.727 [2024-07-12 09:34:31.067429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:44.727 [2024-07-12 09:34:31.067448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.854 ms 00:28:44.727 [2024-07-12 09:34:31.067464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.985 [2024-07-12 09:34:31.093007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.985 [2024-07-12 09:34:31.093074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:44.985 [2024-07-12 09:34:31.093107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.467 ms 00:28:44.985 [2024-07-12 09:34:31.093124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.985 [2024-07-12 09:34:31.106108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.985 [2024-07-12 09:34:31.106161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:44.985 [2024-07-12 09:34:31.106193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.947 ms 00:28:44.985 [2024-07-12 09:34:31.106214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.985 [2024-07-12 09:34:31.120270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.985 [2024-07-12 09:34:31.120350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:44.985 [2024-07-12 09:34:31.120376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.004 ms 00:28:44.985 [2024-07-12 09:34:31.120394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.985 [2024-07-12 09:34:31.121657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.985 [2024-07-12 09:34:31.121705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:44.985 [2024-07-12 09:34:31.121743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:28:44.985 [2024-07-12 09:34:31.121761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.985 [2024-07-12 09:34:31.188954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.985 [2024-07-12 09:34:31.189058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:44.985 [2024-07-12 09:34:31.189086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 67.140 ms 00:28:44.985 [2024-07-12 09:34:31.189103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.985 [2024-07-12 09:34:31.201377] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:44.985 [2024-07-12 09:34:31.204058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.985 [2024-07-12 09:34:31.204093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:44.985 [2024-07-12 09:34:31.204120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.823 ms 00:28:44.985 [2024-07-12 09:34:31.204138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.985 [2024-07-12 09:34:31.204336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.985 [2024-07-12 09:34:31.204379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:44.985 [2024-07-12 09:34:31.204404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:44.985 [2024-07-12 09:34:31.204423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.985 [2024-07-12 09:34:31.205243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.985 [2024-07-12 09:34:31.205293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:44.985 [2024-07-12 09:34:31.205317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:28:44.985 [2024-07-12 09:34:31.205335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.985 [2024-07-12 09:34:31.205406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.985 [2024-07-12 09:34:31.205427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:44.985 [2024-07-12 09:34:31.205447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:44.985 [2024-07-12 09:34:31.205486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.985 [2024-07-12 09:34:31.205547] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:44.985 [2024-07-12 09:34:31.205572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.985 [2024-07-12 09:34:31.205601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:44.985 [2024-07-12 09:34:31.205626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:44.985 [2024-07-12 09:34:31.205645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.985 [2024-07-12 09:34:31.234554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.985 [2024-07-12 09:34:31.234670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:44.985 [2024-07-12 09:34:31.234700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.864 ms 00:28:44.985 [2024-07-12 09:34:31.234721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.985 [2024-07-12 09:34:31.234938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.985 [2024-07-12 09:34:31.234984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:44.985 [2024-07-12 09:34:31.235037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:28:44.985 [2024-07-12 09:34:31.235055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:44.985 [2024-07-12 09:34:31.236742] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 293.929 ms, result 0 00:29:24.410  Copying: 27/1024 [MB] (27 MBps) Copying: 52/1024 [MB] (25 MBps) Copying: 78/1024 [MB] (25 MBps) Copying: 104/1024 [MB] (25 MBps) Copying: 131/1024 [MB] (27 MBps) Copying: 158/1024 [MB] (27 MBps) Copying: 185/1024 [MB] (27 MBps) Copying: 212/1024 [MB] (27 MBps) Copying: 239/1024 [MB] (26 MBps) Copying: 265/1024 [MB] (26 MBps) Copying: 292/1024 [MB] (26 MBps) Copying: 317/1024 [MB] (25 MBps) Copying: 344/1024 [MB] (26 MBps) Copying: 369/1024 [MB] (25 MBps) Copying: 395/1024 [MB] (26 MBps) Copying: 422/1024 [MB] (26 MBps) Copying: 449/1024 [MB] (27 MBps) Copying: 476/1024 [MB] (27 MBps) Copying: 503/1024 [MB] (27 MBps) Copying: 529/1024 [MB] (25 MBps) Copying: 553/1024 [MB] (24 MBps) Copying: 577/1024 [MB] (24 MBps) Copying: 603/1024 [MB] (25 MBps) Copying: 629/1024 [MB] (26 MBps) Copying: 654/1024 [MB] (25 MBps) Copying: 681/1024 [MB] (26 MBps) Copying: 708/1024 [MB] (26 MBps) Copying: 735/1024 [MB] (26 MBps) Copying: 761/1024 [MB] (26 MBps) Copying: 787/1024 [MB] (26 MBps) Copying: 813/1024 [MB] (25 MBps) Copying: 839/1024 [MB] (26 MBps) Copying: 864/1024 [MB] (25 MBps) Copying: 890/1024 [MB] (25 MBps) Copying: 916/1024 [MB] (25 MBps) Copying: 942/1024 [MB] (25 MBps) Copying: 968/1024 [MB] (25 MBps) Copying: 994/1024 [MB] (25 MBps) Copying: 1020/1024 [MB] (26 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-12 09:35:10.649642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.410 [2024-07-12 09:35:10.649757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:24.410 [2024-07-12 09:35:10.649805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:24.410 [2024-07-12 09:35:10.649826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.410 [2024-07-12 09:35:10.649897] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:24.410 [2024-07-12 09:35:10.653568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.410 [2024-07-12 09:35:10.653621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:24.410 [2024-07-12 09:35:10.653651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.629 ms 00:29:24.410 [2024-07-12 09:35:10.653662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.410 [2024-07-12 09:35:10.653920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.410 [2024-07-12 09:35:10.653944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:24.410 [2024-07-12 09:35:10.653958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:29:24.410 [2024-07-12 09:35:10.653968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.410 [2024-07-12 09:35:10.657471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.410 [2024-07-12 09:35:10.657516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:24.410 [2024-07-12 09:35:10.657546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.483 ms 00:29:24.410 [2024-07-12 09:35:10.657557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.410 [2024-07-12 09:35:10.664292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:24.410 [2024-07-12 09:35:10.664344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:24.410 [2024-07-12 09:35:10.664374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.699 ms 00:29:24.410 [2024-07-12 09:35:10.664385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.410 [2024-07-12 09:35:10.694716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.410 [2024-07-12 09:35:10.694779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:24.410 [2024-07-12 09:35:10.694812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.255 ms 00:29:24.410 [2024-07-12 09:35:10.694824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.410 [2024-07-12 09:35:10.712421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.410 [2024-07-12 09:35:10.712499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:24.410 [2024-07-12 09:35:10.712533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.548 ms 00:29:24.410 [2024-07-12 09:35:10.712545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.410 [2024-07-12 09:35:10.715981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.410 [2024-07-12 09:35:10.716025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:24.410 [2024-07-12 09:35:10.716050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.386 ms 00:29:24.410 [2024-07-12 09:35:10.716062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.410 [2024-07-12 09:35:10.746486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.410 [2024-07-12 09:35:10.746564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:29:24.410 [2024-07-12 09:35:10.746598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.402 ms 00:29:24.410 [2024-07-12 09:35:10.746609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.670 [2024-07-12 09:35:10.777240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.670 [2024-07-12 09:35:10.777303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:29:24.670 [2024-07-12 09:35:10.777335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.584 ms 00:29:24.670 [2024-07-12 09:35:10.777347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.670 [2024-07-12 09:35:10.807354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.670 [2024-07-12 09:35:10.807414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:24.670 [2024-07-12 09:35:10.807461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.959 ms 00:29:24.670 [2024-07-12 09:35:10.807489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.671 [2024-07-12 09:35:10.837320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.671 [2024-07-12 09:35:10.837382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:24.671 [2024-07-12 09:35:10.837415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.728 ms 00:29:24.671 [2024-07-12 09:35:10.837426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.671 [2024-07-12 
09:35:10.837474] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:24.671 [2024-07-12 09:35:10.837497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:24.671 [2024-07-12 09:35:10.837513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open 00:29:24.671 [2024-07-12 09:35:10.837525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 
09:35:10.837789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.837995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:29:24.671 [2024-07-12 09:35:10.838074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:24.671 [2024-07-12 09:35:10.838568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:24.672 [2024-07-12 09:35:10.838580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:24.672 [2024-07-12 09:35:10.838591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:24.672 [2024-07-12 09:35:10.838603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:24.672 [2024-07-12 09:35:10.838615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:24.672 [2024-07-12 09:35:10.838626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:24.672 [2024-07-12 09:35:10.838638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:24.672 [2024-07-12 09:35:10.838649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:24.672 [2024-07-12 09:35:10.838660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:24.672 [2024-07-12 09:35:10.838682] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:24.672 [2024-07-12 09:35:10.838694] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7eee1cb7-22b3-4cdf-8b62-0ea030119999 00:29:24.672 [2024-07-12 09:35:10.838705] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:29:24.672 [2024-07-12 09:35:10.838724] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:24.672 [2024-07-12 09:35:10.838734] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:24.672 [2024-07-12 09:35:10.838745] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:24.672 [2024-07-12 09:35:10.838755] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:24.672 [2024-07-12 09:35:10.838766] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:24.672 [2024-07-12 09:35:10.838776] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:24.672 [2024-07-12 09:35:10.838786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:24.672 [2024-07-12 09:35:10.838796] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:24.672 [2024-07-12 09:35:10.838807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.672 [2024-07-12 09:35:10.838818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:24.672 [2024-07-12 09:35:10.838830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.336 ms 00:29:24.672 [2024-07-12 09:35:10.838846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.672 [2024-07-12 09:35:10.854927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.672 [2024-07-12 09:35:10.854972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:24.672 [2024-07-12 09:35:10.855004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.036 ms 00:29:24.672 [2024-07-12 09:35:10.855016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.672 [2024-07-12 09:35:10.855465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:24.672 [2024-07-12 09:35:10.855495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:24.672 [2024-07-12 09:35:10.855510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:29:24.672 [2024-07-12 09:35:10.855527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.672 [2024-07-12 09:35:10.892320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.672 [2024-07-12 09:35:10.892367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:24.672 [2024-07-12 09:35:10.892384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.672 [2024-07-12 09:35:10.892396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.672 [2024-07-12 09:35:10.892469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.672 [2024-07-12 09:35:10.892485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:24.672 [2024-07-12 09:35:10.892496] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.672 [2024-07-12 09:35:10.892514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.672 [2024-07-12 09:35:10.892603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.672 [2024-07-12 09:35:10.892623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:24.672 [2024-07-12 09:35:10.892636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.672 [2024-07-12 09:35:10.892647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.672 [2024-07-12 09:35:10.892668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.672 [2024-07-12 09:35:10.892681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:24.672 [2024-07-12 09:35:10.892693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.672 [2024-07-12 09:35:10.892704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.672 [2024-07-12 09:35:10.986819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.672 [2024-07-12 09:35:10.986899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:24.672 [2024-07-12 09:35:10.986933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.672 [2024-07-12 09:35:10.986945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.931 [2024-07-12 09:35:11.070505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.931 [2024-07-12 09:35:11.070589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:24.931 [2024-07-12 09:35:11.070608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.931 [2024-07-12 09:35:11.070620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.931 [2024-07-12 09:35:11.070708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.931 [2024-07-12 09:35:11.070725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:24.931 [2024-07-12 09:35:11.070738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.931 [2024-07-12 09:35:11.070749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.931 [2024-07-12 09:35:11.070794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.931 [2024-07-12 09:35:11.070810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:24.931 [2024-07-12 09:35:11.070821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.931 [2024-07-12 09:35:11.070832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.931 [2024-07-12 09:35:11.070958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.931 [2024-07-12 09:35:11.070978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:24.931 [2024-07-12 09:35:11.070991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.931 [2024-07-12 09:35:11.071002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.931 [2024-07-12 09:35:11.071054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.931 [2024-07-12 09:35:11.071077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:29:24.931 [2024-07-12 09:35:11.071091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.931 [2024-07-12 09:35:11.071102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.931 [2024-07-12 09:35:11.071152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.931 [2024-07-12 09:35:11.071174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:24.931 [2024-07-12 09:35:11.071208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.931 [2024-07-12 09:35:11.071222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.931 [2024-07-12 09:35:11.071276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.931 [2024-07-12 09:35:11.071292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:24.931 [2024-07-12 09:35:11.071304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.931 [2024-07-12 09:35:11.071315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.931 [2024-07-12 09:35:11.071455] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 421.805 ms, result 0 00:29:25.864 00:29:25.864 00:29:25.864 09:35:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:28.437 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:29:28.437 09:35:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:29:28.437 09:35:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:29:28.437 09:35:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:28.437 09:35:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:28.437 09:35:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:28.437 09:35:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:28.437 09:35:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:28.437 09:35:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 83624 00:29:28.437 09:35:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@948 -- # '[' -z 83624 ']' 00:29:28.437 09:35:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # kill -0 83624 00:29:28.437 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (83624) - No such process 00:29:28.437 Process with pid 83624 is not found 00:29:28.437 09:35:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@975 -- # echo 'Process with pid 83624 is not found' 00:29:28.437 09:35:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:29:28.696 09:35:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:29:28.696 Remove shared memory files 00:29:28.696 09:35:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:28.696 09:35:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:28.696 09:35:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:28.696 09:35:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 
-- # rm -f rm -f 00:29:28.696 09:35:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:28.696 09:35:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:28.696 ************************************ 00:29:28.696 END TEST ftl_dirty_shutdown 00:29:28.696 ************************************ 00:29:28.696 00:29:28.696 real 3m46.189s 00:29:28.696 user 4m20.519s 00:29:28.696 sys 0m36.890s 00:29:28.696 09:35:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:28.696 09:35:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:28.696 09:35:14 ftl -- common/autotest_common.sh@1142 -- # return 0 00:29:28.697 09:35:14 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:28.697 09:35:14 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:29:28.697 09:35:14 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:28.697 09:35:14 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:28.697 ************************************ 00:29:28.697 START TEST ftl_upgrade_shutdown 00:29:28.697 ************************************ 00:29:28.697 09:35:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:28.697 * Looking for test storage... 00:29:28.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:28.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85976 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85976 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 85976 ']' 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:28.697 09:35:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:28.955 [2024-07-12 09:35:15.174394] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:29:28.955 [2024-07-12 09:35:15.174776] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85976 ] 00:29:29.214 [2024-07-12 09:35:15.355681] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.472 [2024-07-12 09:35:15.585602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.040 09:35:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:30.040 09:35:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:29:30.040 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:30.040 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:30.040 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:30.040 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:30.040 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:30.040 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:30.040 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:30.040 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:30.040 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:30.040 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:30.040 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:30.040 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:30.040 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:30.040 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:30.040 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:30.040 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:30.040 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:30.040 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:30.040 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:30.040 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:30.040 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:30.300 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:30.300 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:30.300 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:30.300 09:35:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:29:30.300 09:35:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:30.300 09:35:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:29:30.300 09:35:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:29:30.300 09:35:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:30.559 09:35:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:30.559 { 00:29:30.559 "name": "basen1", 00:29:30.559 "aliases": [ 00:29:30.559 "99c95233-4734-442b-afbe-4195199420d9" 00:29:30.559 ], 00:29:30.559 "product_name": "NVMe disk", 00:29:30.559 "block_size": 4096, 00:29:30.559 "num_blocks": 1310720, 00:29:30.559 "uuid": "99c95233-4734-442b-afbe-4195199420d9", 00:29:30.559 "assigned_rate_limits": { 00:29:30.559 "rw_ios_per_sec": 0, 00:29:30.559 "rw_mbytes_per_sec": 0, 00:29:30.559 "r_mbytes_per_sec": 0, 00:29:30.559 "w_mbytes_per_sec": 0 00:29:30.559 }, 00:29:30.559 "claimed": true, 00:29:30.559 "claim_type": "read_many_write_one", 00:29:30.559 "zoned": false, 00:29:30.559 "supported_io_types": { 00:29:30.559 "read": true, 00:29:30.559 "write": true, 00:29:30.559 "unmap": true, 00:29:30.559 "flush": true, 00:29:30.559 "reset": true, 00:29:30.559 "nvme_admin": true, 00:29:30.559 "nvme_io": true, 00:29:30.559 "nvme_io_md": false, 00:29:30.559 "write_zeroes": true, 00:29:30.559 "zcopy": false, 00:29:30.559 "get_zone_info": false, 00:29:30.559 "zone_management": false, 00:29:30.559 "zone_append": false, 00:29:30.559 "compare": true, 00:29:30.559 "compare_and_write": false, 00:29:30.559 "abort": true, 00:29:30.559 "seek_hole": false, 00:29:30.559 "seek_data": false, 00:29:30.559 "copy": true, 00:29:30.559 "nvme_iov_md": false 00:29:30.559 }, 00:29:30.559 "driver_specific": { 00:29:30.559 "nvme": [ 00:29:30.559 { 00:29:30.559 "pci_address": "0000:00:11.0", 00:29:30.559 "trid": { 00:29:30.559 "trtype": "PCIe", 00:29:30.559 "traddr": "0000:00:11.0" 00:29:30.559 }, 00:29:30.559 "ctrlr_data": { 00:29:30.559 "cntlid": 0, 00:29:30.559 "vendor_id": "0x1b36", 00:29:30.559 "model_number": "QEMU NVMe Ctrl", 00:29:30.559 "serial_number": "12341", 00:29:30.559 "firmware_revision": "8.0.0", 00:29:30.559 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:30.559 "oacs": { 00:29:30.559 "security": 0, 00:29:30.559 "format": 1, 00:29:30.559 "firmware": 0, 00:29:30.559 "ns_manage": 1 00:29:30.559 }, 00:29:30.559 "multi_ctrlr": false, 00:29:30.559 "ana_reporting": false 00:29:30.559 }, 00:29:30.559 "vs": { 00:29:30.559 "nvme_version": "1.4" 00:29:30.559 }, 00:29:30.559 "ns_data": { 00:29:30.559 "id": 1, 00:29:30.559 "can_share": false 00:29:30.559 } 00:29:30.559 } 00:29:30.559 ], 00:29:30.559 "mp_policy": "active_passive" 00:29:30.559 } 00:29:30.559 } 00:29:30.559 ]' 00:29:30.559 09:35:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:30.818 09:35:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:29:30.818 09:35:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:30.818 09:35:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:29:30.818 09:35:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:29:30.818 09:35:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:29:30.818 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:30.818 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:30.818 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:30.818 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:30.818 09:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:31.076 09:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=f31e56df-09a0-4b5e-ace8-f0531d46a6c2 00:29:31.076 09:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:31.076 09:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f31e56df-09a0-4b5e-ace8-f0531d46a6c2 00:29:31.335 09:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:31.593 09:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=453afd9b-f505-43f3-be85-d87a99fa2442 00:29:31.593 09:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 453afd9b-f505-43f3-be85-d87a99fa2442 00:29:31.851 09:35:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=1b38f3a2-786b-40cd-895e-17199a473f29 00:29:31.851 09:35:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 1b38f3a2-786b-40cd-895e-17199a473f29 ]] 00:29:31.851 09:35:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 1b38f3a2-786b-40cd-895e-17199a473f29 5120 00:29:31.851 09:35:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:31.851 09:35:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:31.851 09:35:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=1b38f3a2-786b-40cd-895e-17199a473f29 00:29:31.851 09:35:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:31.851 09:35:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 1b38f3a2-786b-40cd-895e-17199a473f29 00:29:31.851 09:35:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=1b38f3a2-786b-40cd-895e-17199a473f29 00:29:31.851 09:35:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:31.851 09:35:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:29:31.851 09:35:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:29:31.851 09:35:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1b38f3a2-786b-40cd-895e-17199a473f29 00:29:32.110 09:35:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:32.110 { 00:29:32.110 "name": "1b38f3a2-786b-40cd-895e-17199a473f29", 00:29:32.110 "aliases": [ 00:29:32.110 "lvs/basen1p0" 00:29:32.110 ], 00:29:32.110 "product_name": "Logical Volume", 00:29:32.110 "block_size": 4096, 00:29:32.110 "num_blocks": 5242880, 00:29:32.110 "uuid": "1b38f3a2-786b-40cd-895e-17199a473f29", 00:29:32.110 "assigned_rate_limits": { 00:29:32.110 "rw_ios_per_sec": 0, 00:29:32.110 "rw_mbytes_per_sec": 0, 00:29:32.110 "r_mbytes_per_sec": 0, 00:29:32.110 "w_mbytes_per_sec": 0 00:29:32.110 }, 00:29:32.110 "claimed": false, 00:29:32.110 "zoned": false, 00:29:32.110 "supported_io_types": { 00:29:32.110 "read": true, 00:29:32.110 "write": true, 00:29:32.110 "unmap": true, 00:29:32.110 "flush": false, 00:29:32.110 "reset": true, 00:29:32.110 "nvme_admin": false, 00:29:32.110 "nvme_io": false, 00:29:32.110 "nvme_io_md": false, 00:29:32.110 "write_zeroes": true, 00:29:32.110 
"zcopy": false, 00:29:32.110 "get_zone_info": false, 00:29:32.110 "zone_management": false, 00:29:32.110 "zone_append": false, 00:29:32.110 "compare": false, 00:29:32.110 "compare_and_write": false, 00:29:32.110 "abort": false, 00:29:32.110 "seek_hole": true, 00:29:32.110 "seek_data": true, 00:29:32.110 "copy": false, 00:29:32.110 "nvme_iov_md": false 00:29:32.110 }, 00:29:32.110 "driver_specific": { 00:29:32.110 "lvol": { 00:29:32.110 "lvol_store_uuid": "453afd9b-f505-43f3-be85-d87a99fa2442", 00:29:32.110 "base_bdev": "basen1", 00:29:32.110 "thin_provision": true, 00:29:32.110 "num_allocated_clusters": 0, 00:29:32.110 "snapshot": false, 00:29:32.110 "clone": false, 00:29:32.110 "esnap_clone": false 00:29:32.110 } 00:29:32.110 } 00:29:32.110 } 00:29:32.110 ]' 00:29:32.110 09:35:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:32.110 09:35:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:29:32.110 09:35:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:32.110 09:35:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:29:32.110 09:35:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:29:32.110 09:35:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:29:32.110 09:35:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:32.110 09:35:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:32.110 09:35:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:32.677 09:35:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:32.677 09:35:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:32.677 09:35:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:32.677 09:35:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:32.678 09:35:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:32.678 09:35:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 1b38f3a2-786b-40cd-895e-17199a473f29 -c cachen1p0 --l2p_dram_limit 2 00:29:32.937 [2024-07-12 09:35:19.165069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.937 [2024-07-12 09:35:19.165146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:32.937 [2024-07-12 09:35:19.165169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:32.937 [2024-07-12 09:35:19.165184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.937 [2024-07-12 09:35:19.165297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.937 [2024-07-12 09:35:19.165321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:32.937 [2024-07-12 09:35:19.165351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:29:32.937 [2024-07-12 09:35:19.165365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.937 [2024-07-12 09:35:19.165396] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:32.937 [2024-07-12 09:35:19.166416] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:32.937 [2024-07-12 09:35:19.166451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.938 [2024-07-12 09:35:19.166470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:32.938 [2024-07-12 09:35:19.166484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.062 ms 00:29:32.938 [2024-07-12 09:35:19.166497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.938 [2024-07-12 09:35:19.166635] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 1cbd8591-ddee-4dcf-bcc8-e48d9ff55618 00:29:32.938 [2024-07-12 09:35:19.167778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.938 [2024-07-12 09:35:19.167811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:32.938 [2024-07-12 09:35:19.167832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:32.938 [2024-07-12 09:35:19.167845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.938 [2024-07-12 09:35:19.172502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.938 [2024-07-12 09:35:19.172566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:32.938 [2024-07-12 09:35:19.172606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.597 ms 00:29:32.938 [2024-07-12 09:35:19.172634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.938 [2024-07-12 09:35:19.172706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.938 [2024-07-12 09:35:19.172726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:32.938 [2024-07-12 09:35:19.172742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:29:32.938 [2024-07-12 09:35:19.172754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.938 [2024-07-12 09:35:19.172848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.938 [2024-07-12 09:35:19.172867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:32.938 [2024-07-12 09:35:19.172883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:32.938 [2024-07-12 09:35:19.172898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.938 [2024-07-12 09:35:19.172935] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:32.938 [2024-07-12 09:35:19.177773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.938 [2024-07-12 09:35:19.177823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:32.938 [2024-07-12 09:35:19.177841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.851 ms 00:29:32.938 [2024-07-12 09:35:19.177855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.938 [2024-07-12 09:35:19.177895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.938 [2024-07-12 09:35:19.177916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:32.938 [2024-07-12 09:35:19.177929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:32.938 [2024-07-12 09:35:19.177943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:29:32.938 [2024-07-12 09:35:19.178030] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:32.938 [2024-07-12 09:35:19.178199] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:32.938 [2024-07-12 09:35:19.178217] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:32.938 [2024-07-12 09:35:19.178289] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:29:32.938 [2024-07-12 09:35:19.178308] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:32.938 [2024-07-12 09:35:19.178325] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:32.938 [2024-07-12 09:35:19.178337] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:32.938 [2024-07-12 09:35:19.178351] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:32.938 [2024-07-12 09:35:19.178368] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:32.938 [2024-07-12 09:35:19.178392] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:32.938 [2024-07-12 09:35:19.178405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.938 [2024-07-12 09:35:19.178419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:32.938 [2024-07-12 09:35:19.178432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.392 ms 00:29:32.938 [2024-07-12 09:35:19.178446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.938 [2024-07-12 09:35:19.178546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.938 [2024-07-12 09:35:19.178565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:32.938 [2024-07-12 09:35:19.178579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.073 ms 00:29:32.938 [2024-07-12 09:35:19.178604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.938 [2024-07-12 09:35:19.178740] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:32.938 [2024-07-12 09:35:19.178773] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:32.938 [2024-07-12 09:35:19.178788] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:32.938 [2024-07-12 09:35:19.178802] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.938 [2024-07-12 09:35:19.178815] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:32.938 [2024-07-12 09:35:19.178828] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:32.938 [2024-07-12 09:35:19.178853] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:32.938 [2024-07-12 09:35:19.178878] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:32.938 [2024-07-12 09:35:19.178899] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:32.938 [2024-07-12 09:35:19.178914] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.938 [2024-07-12 09:35:19.178926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:32.938 [2024-07-12 09:35:19.178941] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 
14.75 MiB 00:29:32.938 [2024-07-12 09:35:19.178952] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.938 [2024-07-12 09:35:19.178965] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:32.938 [2024-07-12 09:35:19.178981] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:32.938 [2024-07-12 09:35:19.179007] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.938 [2024-07-12 09:35:19.179021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:32.938 [2024-07-12 09:35:19.179047] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:32.938 [2024-07-12 09:35:19.179058] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.938 [2024-07-12 09:35:19.179071] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:32.938 [2024-07-12 09:35:19.179083] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:32.938 [2024-07-12 09:35:19.179096] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:32.938 [2024-07-12 09:35:19.179107] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:32.938 [2024-07-12 09:35:19.179120] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:32.938 [2024-07-12 09:35:19.179131] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:32.938 [2024-07-12 09:35:19.179144] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:32.938 [2024-07-12 09:35:19.179154] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:32.938 [2024-07-12 09:35:19.179167] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:32.938 [2024-07-12 09:35:19.179178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:32.938 [2024-07-12 09:35:19.179224] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:32.938 [2024-07-12 09:35:19.179242] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:32.938 [2024-07-12 09:35:19.179257] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:32.938 [2024-07-12 09:35:19.179268] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:32.938 [2024-07-12 09:35:19.179284] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.938 [2024-07-12 09:35:19.179295] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:32.938 [2024-07-12 09:35:19.179308] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:32.938 [2024-07-12 09:35:19.179319] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.938 [2024-07-12 09:35:19.179333] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:32.938 [2024-07-12 09:35:19.179345] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:32.938 [2024-07-12 09:35:19.179358] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.938 [2024-07-12 09:35:19.179369] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:32.938 [2024-07-12 09:35:19.179386] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:32.938 [2024-07-12 09:35:19.179406] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.938 [2024-07-12 09:35:19.179423] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:29:32.938 [2024-07-12 09:35:19.179436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:32.938 [2024-07-12 09:35:19.179449] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:32.938 [2024-07-12 09:35:19.179461] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:32.938 [2024-07-12 09:35:19.179479] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:32.938 [2024-07-12 09:35:19.179492] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:32.938 [2024-07-12 09:35:19.179516] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:32.938 [2024-07-12 09:35:19.179537] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:32.938 [2024-07-12 09:35:19.179562] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:32.938 [2024-07-12 09:35:19.179576] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:32.938 [2024-07-12 09:35:19.179606] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:32.938 [2024-07-12 09:35:19.179622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:32.938 [2024-07-12 09:35:19.179645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:32.938 [2024-07-12 09:35:19.179658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:32.938 [2024-07-12 09:35:19.179672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:32.938 [2024-07-12 09:35:19.179684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:32.938 [2024-07-12 09:35:19.179704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:32.938 [2024-07-12 09:35:19.179727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:32.938 [2024-07-12 09:35:19.179745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:32.938 [2024-07-12 09:35:19.179757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:32.939 [2024-07-12 09:35:19.179771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:32.939 [2024-07-12 09:35:19.179783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:32.939 [2024-07-12 09:35:19.179800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:32.939 [2024-07-12 09:35:19.179819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:32.939 [2024-07-12 09:35:19.179844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 
blk_offs:0x2f80 blk_sz:0x20 00:29:32.939 [2024-07-12 09:35:19.179861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:32.939 [2024-07-12 09:35:19.179876] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:32.939 [2024-07-12 09:35:19.179890] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:32.939 [2024-07-12 09:35:19.179904] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:32.939 [2024-07-12 09:35:19.179916] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:32.939 [2024-07-12 09:35:19.179931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:32.939 [2024-07-12 09:35:19.179943] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:32.939 [2024-07-12 09:35:19.179958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.939 [2024-07-12 09:35:19.179976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:32.939 [2024-07-12 09:35:19.180003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.283 ms 00:29:32.939 [2024-07-12 09:35:19.180020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.939 [2024-07-12 09:35:19.180113] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
00:29:32.939 [2024-07-12 09:35:19.180140] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:35.472 [2024-07-12 09:35:21.318136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.472 [2024-07-12 09:35:21.318248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:35.472 [2024-07-12 09:35:21.318292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2138.036 ms 00:29:35.472 [2024-07-12 09:35:21.318306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.472 [2024-07-12 09:35:21.350539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.472 [2024-07-12 09:35:21.350601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:35.472 [2024-07-12 09:35:21.350656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.900 ms 00:29:35.472 [2024-07-12 09:35:21.350669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.472 [2024-07-12 09:35:21.350816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.472 [2024-07-12 09:35:21.350837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:35.472 [2024-07-12 09:35:21.350853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:35.472 [2024-07-12 09:35:21.350868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.472 [2024-07-12 09:35:21.389608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.472 [2024-07-12 09:35:21.389668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:35.472 [2024-07-12 09:35:21.389693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.679 ms 00:29:35.472 [2024-07-12 09:35:21.389705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.472 [2024-07-12 09:35:21.389771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.472 [2024-07-12 09:35:21.389790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:35.472 [2024-07-12 09:35:21.389807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:35.472 [2024-07-12 09:35:21.389819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.472 [2024-07-12 09:35:21.390210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.472 [2024-07-12 09:35:21.390232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:35.472 [2024-07-12 09:35:21.390248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.301 ms 00:29:35.472 [2024-07-12 09:35:21.390261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.472 [2024-07-12 09:35:21.390326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.472 [2024-07-12 09:35:21.390357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:35.472 [2024-07-12 09:35:21.390376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:29:35.472 [2024-07-12 09:35:21.390388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.472 [2024-07-12 09:35:21.407996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.472 [2024-07-12 09:35:21.408079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:35.472 [2024-07-12 09:35:21.408118] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.576 ms 00:29:35.472 [2024-07-12 09:35:21.408131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.472 [2024-07-12 09:35:21.422164] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:35.472 [2024-07-12 09:35:21.423079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.472 [2024-07-12 09:35:21.423138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:35.472 [2024-07-12 09:35:21.423158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.790 ms 00:29:35.472 [2024-07-12 09:35:21.423172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.472 [2024-07-12 09:35:21.455555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.472 [2024-07-12 09:35:21.455630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:35.472 [2024-07-12 09:35:21.455653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.301 ms 00:29:35.472 [2024-07-12 09:35:21.455669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.472 [2024-07-12 09:35:21.455794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.472 [2024-07-12 09:35:21.455822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:35.472 [2024-07-12 09:35:21.455836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:29:35.472 [2024-07-12 09:35:21.455853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.472 [2024-07-12 09:35:21.485997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.472 [2024-07-12 09:35:21.486066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:35.472 [2024-07-12 09:35:21.486086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.071 ms 00:29:35.472 [2024-07-12 09:35:21.486101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.472 [2024-07-12 09:35:21.516904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.473 [2024-07-12 09:35:21.517101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:35.473 [2024-07-12 09:35:21.517132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.747 ms 00:29:35.473 [2024-07-12 09:35:21.517148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.473 [2024-07-12 09:35:21.517905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.473 [2024-07-12 09:35:21.517936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:35.473 [2024-07-12 09:35:21.517955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.685 ms 00:29:35.473 [2024-07-12 09:35:21.517985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.473 [2024-07-12 09:35:21.604760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.473 [2024-07-12 09:35:21.604847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:35.473 [2024-07-12 09:35:21.604870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 86.695 ms 00:29:35.473 [2024-07-12 09:35:21.604889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.473 [2024-07-12 09:35:21.637256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:35.473 [2024-07-12 09:35:21.637342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:35.473 [2024-07-12 09:35:21.637381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.303 ms 00:29:35.473 [2024-07-12 09:35:21.637396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.473 [2024-07-12 09:35:21.669083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.473 [2024-07-12 09:35:21.669160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:35.473 [2024-07-12 09:35:21.669191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.621 ms 00:29:35.473 [2024-07-12 09:35:21.669241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.473 [2024-07-12 09:35:21.700244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.473 [2024-07-12 09:35:21.700317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:35.473 [2024-07-12 09:35:21.700337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.945 ms 00:29:35.473 [2024-07-12 09:35:21.700351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.473 [2024-07-12 09:35:21.700417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.473 [2024-07-12 09:35:21.700441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:35.473 [2024-07-12 09:35:21.700455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:35.473 [2024-07-12 09:35:21.700471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.473 [2024-07-12 09:35:21.700586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:35.473 [2024-07-12 09:35:21.700610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:35.473 [2024-07-12 09:35:21.700626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:29:35.473 [2024-07-12 09:35:21.700639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:35.473 [2024-07-12 09:35:21.701713] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2536.114 ms, result 0 00:29:35.473 { 00:29:35.473 "name": "ftl", 00:29:35.473 "uuid": "1cbd8591-ddee-4dcf-bcc8-e48d9ff55618" 00:29:35.473 } 00:29:35.473 09:35:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:35.732 [2024-07-12 09:35:21.981052] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:35.732 09:35:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:35.991 09:35:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:36.250 [2024-07-12 09:35:22.529683] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:36.250 09:35:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:36.509 [2024-07-12 09:35:22.803016] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:36.509 09:35:22 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:37.075 09:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:37.075 09:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:37.075 09:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:37.075 09:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:37.075 09:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:37.075 09:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:37.075 09:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:37.075 09:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:37.075 09:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:37.075 09:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:37.075 09:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:37.075 Fill FTL, iteration 1 00:29:37.075 09:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:37.075 09:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:37.075 09:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:37.075 09:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:37.075 09:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:37.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:37.075 09:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=86093 00:29:37.075 09:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:29:37.075 09:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 86093 /var/tmp/spdk.tgt.sock 00:29:37.075 09:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:37.075 09:35:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86093 ']' 00:29:37.075 09:35:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:37.075 09:35:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:37.075 09:35:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:37.075 09:35:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:37.075 09:35:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:37.075 [2024-07-12 09:35:23.359504] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:29:37.075 [2024-07-12 09:35:23.359986] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86093 ] 00:29:37.333 [2024-07-12 09:35:23.530097] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.591 [2024-07-12 09:35:23.745230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:38.159 09:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:38.159 09:35:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:29:38.159 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:38.419 ftln1 00:29:38.681 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:38.681 09:35:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:38.681 09:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:38.681 09:35:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 86093 00:29:38.681 09:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 86093 ']' 00:29:38.681 09:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 86093 00:29:38.681 09:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:29:38.681 09:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:38.681 09:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86093 00:29:38.939 killing process with pid 86093 00:29:38.939 09:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:38.939 09:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:38.939 09:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86093' 00:29:38.939 09:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 86093 00:29:38.939 09:35:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 86093 00:29:40.844 09:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:29:40.844 09:35:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:40.844 [2024-07-12 09:35:27.105944] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
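The initiator-side setup that just ran amounts to attaching the exported FTL namespace over TCP and dumping the resulting bdev subsystem into a small JSON config that the later spdk_dd calls consume via --json. A rough sketch using only the commands echoed above; the redirect target ini.json is inferred from those later --json arguments, and the actual wrapper lives in ftl/common.sh:

  # attach the FTL namespace exported by the main target; it shows up locally as ftln1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
      bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
  # wrap the bdev subsystem dump in a minimal JSON config (presumably written to ini.json)
  {
      echo '{"subsystems": ['
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
      echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json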
00:29:40.844 [2024-07-12 09:35:27.106119] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86139 ] 00:29:41.102 [2024-07-12 09:35:27.279162] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.360 [2024-07-12 09:35:27.456458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.845  Copying: 209/1024 [MB] (209 MBps) Copying: 416/1024 [MB] (207 MBps) Copying: 620/1024 [MB] (204 MBps) Copying: 827/1024 [MB] (207 MBps) Copying: 1024/1024 [MB] (average 205 MBps) 00:29:47.845 00:29:47.845 09:35:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:29:47.845 Calculate MD5 checksum, iteration 1 00:29:47.845 09:35:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:29:47.845 09:35:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:47.845 09:35:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:47.845 09:35:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:47.845 09:35:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:47.845 09:35:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:47.845 09:35:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:47.845 [2024-07-12 09:35:34.067265] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
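Each fill/verify iteration in this log follows the same pattern: write 1024 x 1 MiB of random data into ftln1 at a given offset, then read the identical range back into a scratch file and record its MD5 so the sum can be re-checked after the upgrade shutdown. A minimal sketch with the iteration-1 flags taken from the commands above (the tcp_dd helper in ftl/common.sh supplies the --json and RPC-socket plumbing):

  # iteration 1: fill the first 1 GiB of the FTL bdev with urandom data
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
  # read the same 1 GiB range back out and hash it for later comparison
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d ' '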
00:29:47.845 [2024-07-12 09:35:34.067434] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86210 ] 00:29:48.103 [2024-07-12 09:35:34.242062] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.103 [2024-07-12 09:35:34.433497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.809  Copying: 497/1024 [MB] (497 MBps) Copying: 946/1024 [MB] (449 MBps) Copying: 1024/1024 [MB] (average 473 MBps) 00:29:51.809 00:29:51.809 09:35:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:29:51.809 09:35:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:53.712 09:35:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:53.712 Fill FTL, iteration 2 00:29:53.712 09:35:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=0eef93adcbcfb20a94c4cb0ece3b1935 00:29:53.712 09:35:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:53.712 09:35:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:53.712 09:35:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:29:53.712 09:35:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:53.712 09:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:53.712 09:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:53.712 09:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:53.712 09:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:53.712 09:35:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:53.712 [2024-07-12 09:35:40.059661] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:29:53.712 [2024-07-12 09:35:40.059836] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86272 ] 00:29:53.970 [2024-07-12 09:35:40.234109] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.229 [2024-07-12 09:35:40.441272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.796  Copying: 211/1024 [MB] (211 MBps) Copying: 422/1024 [MB] (211 MBps) Copying: 635/1024 [MB] (213 MBps) Copying: 847/1024 [MB] (212 MBps) Copying: 1024/1024 [MB] (average 210 MBps) 00:30:00.796 00:30:00.796 09:35:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:00.796 Calculate MD5 checksum, iteration 2 00:30:00.796 09:35:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:00.796 09:35:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:00.796 09:35:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:00.796 09:35:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:00.796 09:35:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:00.796 09:35:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:00.797 09:35:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:00.797 [2024-07-12 09:35:46.896483] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:30:00.797 [2024-07-12 09:35:46.896659] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86342 ] 00:30:00.797 [2024-07-12 09:35:47.067896] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.064 [2024-07-12 09:35:47.248582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.187  Copying: 505/1024 [MB] (505 MBps) Copying: 995/1024 [MB] (490 MBps) Copying: 1024/1024 [MB] (average 497 MBps) 00:30:05.187 00:30:05.187 09:35:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:05.187 09:35:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:07.089 09:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:07.089 09:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=da4f06f655923aaea7a1e58169c8c689 00:30:07.089 09:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:07.089 09:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:07.089 09:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:07.348 [2024-07-12 09:35:53.462372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.348 [2024-07-12 09:35:53.462435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:07.348 [2024-07-12 09:35:53.462474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:07.348 [2024-07-12 09:35:53.462501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.348 [2024-07-12 09:35:53.462537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.348 [2024-07-12 09:35:53.462569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:07.348 [2024-07-12 09:35:53.462596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:07.348 [2024-07-12 09:35:53.462630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.348 [2024-07-12 09:35:53.462659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.348 [2024-07-12 09:35:53.462673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:07.348 [2024-07-12 09:35:53.462712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:07.348 [2024-07-12 09:35:53.462724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.348 [2024-07-12 09:35:53.462801] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.422 ms, result 0 00:30:07.348 true 00:30:07.348 09:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:07.607 { 00:30:07.607 "name": "ftl", 00:30:07.607 "properties": [ 00:30:07.607 { 00:30:07.607 "name": "superblock_version", 00:30:07.607 "value": 5, 00:30:07.607 "read-only": true 00:30:07.607 }, 00:30:07.607 { 00:30:07.607 "name": "base_device", 00:30:07.607 "bands": [ 00:30:07.607 { 00:30:07.607 "id": 0, 00:30:07.607 "state": "FREE", 00:30:07.607 "validity": 0.0 00:30:07.607 }, 
00:30:07.607 { 00:30:07.607 "id": 1, 00:30:07.607 "state": "FREE", 00:30:07.607 "validity": 0.0 00:30:07.607 }, 00:30:07.607 { 00:30:07.607 "id": 2, 00:30:07.607 "state": "FREE", 00:30:07.607 "validity": 0.0 00:30:07.607 }, 00:30:07.607 { 00:30:07.607 "id": 3, 00:30:07.607 "state": "FREE", 00:30:07.607 "validity": 0.0 00:30:07.607 }, 00:30:07.607 { 00:30:07.607 "id": 4, 00:30:07.607 "state": "FREE", 00:30:07.607 "validity": 0.0 00:30:07.607 }, 00:30:07.607 { 00:30:07.607 "id": 5, 00:30:07.607 "state": "FREE", 00:30:07.607 "validity": 0.0 00:30:07.607 }, 00:30:07.607 { 00:30:07.607 "id": 6, 00:30:07.607 "state": "FREE", 00:30:07.607 "validity": 0.0 00:30:07.607 }, 00:30:07.607 { 00:30:07.607 "id": 7, 00:30:07.607 "state": "FREE", 00:30:07.607 "validity": 0.0 00:30:07.607 }, 00:30:07.607 { 00:30:07.607 "id": 8, 00:30:07.607 "state": "FREE", 00:30:07.607 "validity": 0.0 00:30:07.607 }, 00:30:07.607 { 00:30:07.607 "id": 9, 00:30:07.607 "state": "FREE", 00:30:07.607 "validity": 0.0 00:30:07.607 }, 00:30:07.607 { 00:30:07.607 "id": 10, 00:30:07.607 "state": "FREE", 00:30:07.607 "validity": 0.0 00:30:07.607 }, 00:30:07.607 { 00:30:07.607 "id": 11, 00:30:07.607 "state": "FREE", 00:30:07.607 "validity": 0.0 00:30:07.607 }, 00:30:07.607 { 00:30:07.607 "id": 12, 00:30:07.607 "state": "FREE", 00:30:07.607 "validity": 0.0 00:30:07.607 }, 00:30:07.607 { 00:30:07.607 "id": 13, 00:30:07.607 "state": "FREE", 00:30:07.607 "validity": 0.0 00:30:07.607 }, 00:30:07.607 { 00:30:07.607 "id": 14, 00:30:07.607 "state": "FREE", 00:30:07.607 "validity": 0.0 00:30:07.607 }, 00:30:07.607 { 00:30:07.607 "id": 15, 00:30:07.607 "state": "FREE", 00:30:07.607 "validity": 0.0 00:30:07.607 }, 00:30:07.607 { 00:30:07.607 "id": 16, 00:30:07.607 "state": "FREE", 00:30:07.607 "validity": 0.0 00:30:07.607 }, 00:30:07.607 { 00:30:07.607 "id": 17, 00:30:07.607 "state": "FREE", 00:30:07.607 "validity": 0.0 00:30:07.607 } 00:30:07.607 ], 00:30:07.607 "read-only": true 00:30:07.607 }, 00:30:07.607 { 00:30:07.607 "name": "cache_device", 00:30:07.607 "type": "bdev", 00:30:07.607 "chunks": [ 00:30:07.607 { 00:30:07.607 "id": 0, 00:30:07.607 "state": "INACTIVE", 00:30:07.607 "utilization": 0.0 00:30:07.607 }, 00:30:07.607 { 00:30:07.607 "id": 1, 00:30:07.607 "state": "CLOSED", 00:30:07.607 "utilization": 1.0 00:30:07.607 }, 00:30:07.607 { 00:30:07.607 "id": 2, 00:30:07.607 "state": "CLOSED", 00:30:07.607 "utilization": 1.0 00:30:07.607 }, 00:30:07.607 { 00:30:07.607 "id": 3, 00:30:07.607 "state": "OPEN", 00:30:07.607 "utilization": 0.001953125 00:30:07.607 }, 00:30:07.607 { 00:30:07.607 "id": 4, 00:30:07.607 "state": "OPEN", 00:30:07.607 "utilization": 0.0 00:30:07.607 } 00:30:07.607 ], 00:30:07.607 "read-only": true 00:30:07.607 }, 00:30:07.607 { 00:30:07.607 "name": "verbose_mode", 00:30:07.607 "value": true, 00:30:07.607 "unit": "", 00:30:07.607 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:07.607 }, 00:30:07.607 { 00:30:07.607 "name": "prep_upgrade_on_shutdown", 00:30:07.607 "value": false, 00:30:07.607 "unit": "", 00:30:07.607 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:07.607 } 00:30:07.607 ] 00:30:07.607 } 00:30:07.607 09:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:07.607 [2024-07-12 09:35:53.937145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.607 [2024-07-12 
09:35:53.937234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:07.607 [2024-07-12 09:35:53.937272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:07.607 [2024-07-12 09:35:53.937284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.607 [2024-07-12 09:35:53.937321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.607 [2024-07-12 09:35:53.937355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:07.607 [2024-07-12 09:35:53.937367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:07.607 [2024-07-12 09:35:53.937379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.607 [2024-07-12 09:35:53.937407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.607 [2024-07-12 09:35:53.937421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:07.607 [2024-07-12 09:35:53.937433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:07.607 [2024-07-12 09:35:53.937444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.607 [2024-07-12 09:35:53.937519] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.363 ms, result 0 00:30:07.607 true 00:30:07.607 09:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:07.607 09:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:07.607 09:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:08.174 09:35:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:08.174 09:35:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:08.174 09:35:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:08.174 [2024-07-12 09:35:54.431727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.174 [2024-07-12 09:35:54.431790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:08.174 [2024-07-12 09:35:54.431827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:08.174 [2024-07-12 09:35:54.431854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.174 [2024-07-12 09:35:54.431891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.174 [2024-07-12 09:35:54.431907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:08.174 [2024-07-12 09:35:54.431919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:08.174 [2024-07-12 09:35:54.431943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.174 [2024-07-12 09:35:54.431970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.174 [2024-07-12 09:35:54.431983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:08.174 [2024-07-12 09:35:54.432009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:08.174 [2024-07-12 09:35:54.432019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
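The used-chunk check that yields used=3 below is a plain jq count over the bdev_ftl_get_properties output: it keeps only cache_device chunks with non-zero utilization, which the test appears to use as a sanity check that some NV-cache data is buffered before the shutdown/upgrade step. Roughly (exact error handling lives in upgrade_shutdown.sh):

  used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
  [[ $used -eq 0 ]] && exit 1   # assumed failure path if nothing was buffered; the log shows 3 chunks in use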
00:30:08.174 [2024-07-12 09:35:54.432088] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.353 ms, result 0 00:30:08.174 true 00:30:08.174 09:35:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:08.432 { 00:30:08.432 "name": "ftl", 00:30:08.432 "properties": [ 00:30:08.432 { 00:30:08.432 "name": "superblock_version", 00:30:08.432 "value": 5, 00:30:08.432 "read-only": true 00:30:08.432 }, 00:30:08.432 { 00:30:08.432 "name": "base_device", 00:30:08.432 "bands": [ 00:30:08.432 { 00:30:08.432 "id": 0, 00:30:08.432 "state": "FREE", 00:30:08.432 "validity": 0.0 00:30:08.432 }, 00:30:08.432 { 00:30:08.432 "id": 1, 00:30:08.432 "state": "FREE", 00:30:08.432 "validity": 0.0 00:30:08.432 }, 00:30:08.432 { 00:30:08.432 "id": 2, 00:30:08.432 "state": "FREE", 00:30:08.432 "validity": 0.0 00:30:08.432 }, 00:30:08.432 { 00:30:08.432 "id": 3, 00:30:08.432 "state": "FREE", 00:30:08.432 "validity": 0.0 00:30:08.432 }, 00:30:08.432 { 00:30:08.432 "id": 4, 00:30:08.432 "state": "FREE", 00:30:08.432 "validity": 0.0 00:30:08.432 }, 00:30:08.432 { 00:30:08.432 "id": 5, 00:30:08.433 "state": "FREE", 00:30:08.433 "validity": 0.0 00:30:08.433 }, 00:30:08.433 { 00:30:08.433 "id": 6, 00:30:08.433 "state": "FREE", 00:30:08.433 "validity": 0.0 00:30:08.433 }, 00:30:08.433 { 00:30:08.433 "id": 7, 00:30:08.433 "state": "FREE", 00:30:08.433 "validity": 0.0 00:30:08.433 }, 00:30:08.433 { 00:30:08.433 "id": 8, 00:30:08.433 "state": "FREE", 00:30:08.433 "validity": 0.0 00:30:08.433 }, 00:30:08.433 { 00:30:08.433 "id": 9, 00:30:08.433 "state": "FREE", 00:30:08.433 "validity": 0.0 00:30:08.433 }, 00:30:08.433 { 00:30:08.433 "id": 10, 00:30:08.433 "state": "FREE", 00:30:08.433 "validity": 0.0 00:30:08.433 }, 00:30:08.433 { 00:30:08.433 "id": 11, 00:30:08.433 "state": "FREE", 00:30:08.433 "validity": 0.0 00:30:08.433 }, 00:30:08.433 { 00:30:08.433 "id": 12, 00:30:08.433 "state": "FREE", 00:30:08.433 "validity": 0.0 00:30:08.433 }, 00:30:08.433 { 00:30:08.433 "id": 13, 00:30:08.433 "state": "FREE", 00:30:08.433 "validity": 0.0 00:30:08.433 }, 00:30:08.433 { 00:30:08.433 "id": 14, 00:30:08.433 "state": "FREE", 00:30:08.433 "validity": 0.0 00:30:08.433 }, 00:30:08.433 { 00:30:08.433 "id": 15, 00:30:08.433 "state": "FREE", 00:30:08.433 "validity": 0.0 00:30:08.433 }, 00:30:08.433 { 00:30:08.433 "id": 16, 00:30:08.433 "state": "FREE", 00:30:08.433 "validity": 0.0 00:30:08.433 }, 00:30:08.433 { 00:30:08.433 "id": 17, 00:30:08.433 "state": "FREE", 00:30:08.433 "validity": 0.0 00:30:08.433 } 00:30:08.433 ], 00:30:08.433 "read-only": true 00:30:08.433 }, 00:30:08.433 { 00:30:08.433 "name": "cache_device", 00:30:08.433 "type": "bdev", 00:30:08.433 "chunks": [ 00:30:08.433 { 00:30:08.433 "id": 0, 00:30:08.433 "state": "INACTIVE", 00:30:08.433 "utilization": 0.0 00:30:08.433 }, 00:30:08.433 { 00:30:08.433 "id": 1, 00:30:08.433 "state": "CLOSED", 00:30:08.433 "utilization": 1.0 00:30:08.433 }, 00:30:08.433 { 00:30:08.433 "id": 2, 00:30:08.433 "state": "CLOSED", 00:30:08.433 "utilization": 1.0 00:30:08.433 }, 00:30:08.433 { 00:30:08.433 "id": 3, 00:30:08.433 "state": "OPEN", 00:30:08.433 "utilization": 0.001953125 00:30:08.433 }, 00:30:08.433 { 00:30:08.433 "id": 4, 00:30:08.433 "state": "OPEN", 00:30:08.433 "utilization": 0.0 00:30:08.433 } 00:30:08.433 ], 00:30:08.433 "read-only": true 00:30:08.433 }, 00:30:08.433 { 00:30:08.433 "name": "verbose_mode", 00:30:08.433 "value": 
true, 00:30:08.433 "unit": "", 00:30:08.433 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:08.433 }, 00:30:08.433 { 00:30:08.433 "name": "prep_upgrade_on_shutdown", 00:30:08.433 "value": true, 00:30:08.433 "unit": "", 00:30:08.433 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:08.433 } 00:30:08.433 ] 00:30:08.433 } 00:30:08.433 09:35:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:08.433 09:35:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85976 ]] 00:30:08.433 09:35:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85976 00:30:08.433 09:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 85976 ']' 00:30:08.433 09:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 85976 00:30:08.433 09:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:30:08.433 09:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:08.433 09:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85976 00:30:08.433 killing process with pid 85976 00:30:08.433 09:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:08.433 09:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:08.433 09:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85976' 00:30:08.433 09:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 85976 00:30:08.433 09:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 85976 00:30:09.368 [2024-07-12 09:35:55.579031] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:09.368 [2024-07-12 09:35:55.596681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.368 [2024-07-12 09:35:55.596732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:09.368 [2024-07-12 09:35:55.596768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:09.368 [2024-07-12 09:35:55.596779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.368 [2024-07-12 09:35:55.596810] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:09.368 [2024-07-12 09:35:55.600045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.368 [2024-07-12 09:35:55.600077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:09.368 [2024-07-12 09:35:55.600107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.215 ms 00:30:09.368 [2024-07-12 09:35:55.600117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.363 [2024-07-12 09:36:04.108881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.363 [2024-07-12 09:36:04.108994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:19.363 [2024-07-12 09:36:04.109032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8508.779 ms 00:30:19.363 [2024-07-12 09:36:04.109044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.363 [2024-07-12 09:36:04.110519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:19.363 [2024-07-12 09:36:04.110553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:19.363 [2024-07-12 09:36:04.110576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.447 ms 00:30:19.363 [2024-07-12 09:36:04.110587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.363 [2024-07-12 09:36:04.111896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.363 [2024-07-12 09:36:04.111967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:19.363 [2024-07-12 09:36:04.111982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.265 ms 00:30:19.363 [2024-07-12 09:36:04.112008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.363 [2024-07-12 09:36:04.124172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.363 [2024-07-12 09:36:04.124252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:19.363 [2024-07-12 09:36:04.124287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.097 ms 00:30:19.363 [2024-07-12 09:36:04.124298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.363 [2024-07-12 09:36:04.131708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.363 [2024-07-12 09:36:04.131752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:19.363 [2024-07-12 09:36:04.131785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.369 ms 00:30:19.363 [2024-07-12 09:36:04.131796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.363 [2024-07-12 09:36:04.131940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.363 [2024-07-12 09:36:04.131978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:19.363 [2024-07-12 09:36:04.131990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.088 ms 00:30:19.363 [2024-07-12 09:36:04.132001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.363 [2024-07-12 09:36:04.143481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.363 [2024-07-12 09:36:04.143518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:30:19.363 [2024-07-12 09:36:04.143548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.444 ms 00:30:19.363 [2024-07-12 09:36:04.143558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.363 [2024-07-12 09:36:04.155151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.363 [2024-07-12 09:36:04.155211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:30:19.363 [2024-07-12 09:36:04.155244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.557 ms 00:30:19.363 [2024-07-12 09:36:04.155254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.363 [2024-07-12 09:36:04.166393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.363 [2024-07-12 09:36:04.166442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:19.363 [2024-07-12 09:36:04.166473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.102 ms 00:30:19.363 [2024-07-12 09:36:04.166483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.363 [2024-07-12 09:36:04.179032] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:30:19.363 [2024-07-12 09:36:04.179071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:19.363 [2024-07-12 09:36:04.179103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.465 ms 00:30:19.363 [2024-07-12 09:36:04.179113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.363 [2024-07-12 09:36:04.179150] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:19.363 [2024-07-12 09:36:04.179177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:19.363 [2024-07-12 09:36:04.179209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:19.363 [2024-07-12 09:36:04.179239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:19.363 [2024-07-12 09:36:04.179251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:19.363 [2024-07-12 09:36:04.179262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:19.363 [2024-07-12 09:36:04.179273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:19.363 [2024-07-12 09:36:04.179285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:19.363 [2024-07-12 09:36:04.179296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:19.363 [2024-07-12 09:36:04.179324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:19.363 [2024-07-12 09:36:04.179336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:19.363 [2024-07-12 09:36:04.179348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:19.363 [2024-07-12 09:36:04.179359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:19.363 [2024-07-12 09:36:04.179370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:19.363 [2024-07-12 09:36:04.179382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:19.363 [2024-07-12 09:36:04.179394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:19.363 [2024-07-12 09:36:04.179419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:19.363 [2024-07-12 09:36:04.179431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:19.363 [2024-07-12 09:36:04.179442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:19.363 [2024-07-12 09:36:04.179456] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:19.363 [2024-07-12 09:36:04.179467] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 1cbd8591-ddee-4dcf-bcc8-e48d9ff55618 00:30:19.363 [2024-07-12 09:36:04.179479] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:19.363 [2024-07-12 09:36:04.179490] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 
00:30:19.363 [2024-07-12 09:36:04.179500] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:30:19.363 [2024-07-12 09:36:04.179511] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:30:19.363 [2024-07-12 09:36:04.179522] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:19.363 [2024-07-12 09:36:04.179532] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:19.363 [2024-07-12 09:36:04.179543] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:19.363 [2024-07-12 09:36:04.179568] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:19.363 [2024-07-12 09:36:04.179579] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:19.363 [2024-07-12 09:36:04.179590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.363 [2024-07-12 09:36:04.179612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:19.363 [2024-07-12 09:36:04.179631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.442 ms 00:30:19.363 [2024-07-12 09:36:04.179643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.363 [2024-07-12 09:36:04.195737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.363 [2024-07-12 09:36:04.195781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:19.363 [2024-07-12 09:36:04.195799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.051 ms 00:30:19.363 [2024-07-12 09:36:04.195810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.363 [2024-07-12 09:36:04.196298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:19.363 [2024-07-12 09:36:04.196326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:19.363 [2024-07-12 09:36:04.196338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.441 ms 00:30:19.363 [2024-07-12 09:36:04.196363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.363 [2024-07-12 09:36:04.242356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:19.363 [2024-07-12 09:36:04.242407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:19.363 [2024-07-12 09:36:04.242439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:19.363 [2024-07-12 09:36:04.242449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.363 [2024-07-12 09:36:04.242505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:19.363 [2024-07-12 09:36:04.242531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:19.363 [2024-07-12 09:36:04.242543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:19.363 [2024-07-12 09:36:04.242553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.363 [2024-07-12 09:36:04.242657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:19.363 [2024-07-12 09:36:04.242675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:19.363 [2024-07-12 09:36:04.242687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:19.363 [2024-07-12 09:36:04.242697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.363 [2024-07-12 09:36:04.242726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:30:19.364 [2024-07-12 09:36:04.242740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:19.364 [2024-07-12 09:36:04.242754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:19.364 [2024-07-12 09:36:04.242763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.364 [2024-07-12 09:36:04.327428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:19.364 [2024-07-12 09:36:04.327491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:19.364 [2024-07-12 09:36:04.327525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:19.364 [2024-07-12 09:36:04.327535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.364 [2024-07-12 09:36:04.399258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:19.364 [2024-07-12 09:36:04.399320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:19.364 [2024-07-12 09:36:04.399354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:19.364 [2024-07-12 09:36:04.399364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.364 [2024-07-12 09:36:04.399453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:19.364 [2024-07-12 09:36:04.399470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:19.364 [2024-07-12 09:36:04.399481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:19.364 [2024-07-12 09:36:04.399490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.364 [2024-07-12 09:36:04.399540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:19.364 [2024-07-12 09:36:04.399555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:19.364 [2024-07-12 09:36:04.399566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:19.364 [2024-07-12 09:36:04.399581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.364 [2024-07-12 09:36:04.399735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:19.364 [2024-07-12 09:36:04.399753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:19.364 [2024-07-12 09:36:04.399765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:19.364 [2024-07-12 09:36:04.399775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.364 [2024-07-12 09:36:04.399840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:19.364 [2024-07-12 09:36:04.399863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:19.364 [2024-07-12 09:36:04.399875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:19.364 [2024-07-12 09:36:04.399885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.364 [2024-07-12 09:36:04.399937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:19.364 [2024-07-12 09:36:04.399951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:19.364 [2024-07-12 09:36:04.399963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:19.364 [2024-07-12 09:36:04.399973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.364 [2024-07-12 09:36:04.400070] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:19.364 [2024-07-12 09:36:04.400090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:19.364 [2024-07-12 09:36:04.400101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:19.364 [2024-07-12 09:36:04.400117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:19.364 [2024-07-12 09:36:04.400393] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8803.679 ms, result 0 00:30:21.265 09:36:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:21.265 09:36:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:30:21.265 09:36:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:21.265 09:36:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:21.265 09:36:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:21.265 09:36:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86549 00:30:21.265 09:36:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:21.265 09:36:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:21.265 09:36:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86549 00:30:21.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:21.265 09:36:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86549 ']' 00:30:21.265 09:36:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:21.265 09:36:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:21.265 09:36:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:21.265 09:36:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:21.265 09:36:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:21.265 [2024-07-12 09:36:07.428541] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
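The shutdown/restart sequence recorded above reduces to: arm prep_upgrade_on_shutdown, kill the target so FTL runs its 'FTL shutdown' management process (the ~8.8 s persist of L2P, band, trim and superblock metadata logged here), then start a fresh spdk_tgt from the previously saved tgt.json so the same FTL device is brought back up. A rough outline using the commands from the log; the tcp_target_shutdown/tcp_target_setup helpers in ftl/common.sh wrap these steps:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
  kill $spdk_tgt_pid && wait $spdk_tgt_pid    # FTL persists its state on the way down
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!                              # the restarted target reloads FTL from the saved config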
00:30:21.265 [2024-07-12 09:36:07.428712] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86549 ] 00:30:21.265 [2024-07-12 09:36:07.592132] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.523 [2024-07-12 09:36:07.766923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:22.459 [2024-07-12 09:36:08.483501] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:22.459 [2024-07-12 09:36:08.483593] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:22.459 [2024-07-12 09:36:08.630464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.459 [2024-07-12 09:36:08.630526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:22.459 [2024-07-12 09:36:08.630566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:22.459 [2024-07-12 09:36:08.630577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.459 [2024-07-12 09:36:08.630657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.459 [2024-07-12 09:36:08.630676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:22.459 [2024-07-12 09:36:08.630688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:30:22.459 [2024-07-12 09:36:08.630698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.459 [2024-07-12 09:36:08.630740] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:22.459 [2024-07-12 09:36:08.631728] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:22.459 [2024-07-12 09:36:08.631771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.459 [2024-07-12 09:36:08.631785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:22.459 [2024-07-12 09:36:08.631798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.038 ms 00:30:22.459 [2024-07-12 09:36:08.631808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.459 [2024-07-12 09:36:08.633002] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:22.459 [2024-07-12 09:36:08.648103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.459 [2024-07-12 09:36:08.648143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:22.460 [2024-07-12 09:36:08.648175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.104 ms 00:30:22.460 [2024-07-12 09:36:08.648185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.460 [2024-07-12 09:36:08.648290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.460 [2024-07-12 09:36:08.648326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:22.460 [2024-07-12 09:36:08.648339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:30:22.460 [2024-07-12 09:36:08.648349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.460 [2024-07-12 09:36:08.652669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.460 [2024-07-12 
09:36:08.652709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:22.460 [2024-07-12 09:36:08.652739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.225 ms 00:30:22.460 [2024-07-12 09:36:08.652749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.460 [2024-07-12 09:36:08.652888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.460 [2024-07-12 09:36:08.652907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:22.460 [2024-07-12 09:36:08.652918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.108 ms 00:30:22.460 [2024-07-12 09:36:08.652932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.460 [2024-07-12 09:36:08.652992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.460 [2024-07-12 09:36:08.653008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:22.460 [2024-07-12 09:36:08.653020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:22.460 [2024-07-12 09:36:08.653030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.460 [2024-07-12 09:36:08.653064] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:22.460 [2024-07-12 09:36:08.657249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.460 [2024-07-12 09:36:08.657284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:22.460 [2024-07-12 09:36:08.657315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.194 ms 00:30:22.460 [2024-07-12 09:36:08.657326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.460 [2024-07-12 09:36:08.657362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.460 [2024-07-12 09:36:08.657376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:22.460 [2024-07-12 09:36:08.657387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:22.460 [2024-07-12 09:36:08.657402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.460 [2024-07-12 09:36:08.657445] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:22.460 [2024-07-12 09:36:08.657474] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:22.460 [2024-07-12 09:36:08.657520] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:22.460 [2024-07-12 09:36:08.657540] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:30:22.460 [2024-07-12 09:36:08.657636] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:22.460 [2024-07-12 09:36:08.657650] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:22.460 [2024-07-12 09:36:08.657685] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:30:22.460 [2024-07-12 09:36:08.657699] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:22.460 [2024-07-12 09:36:08.657711] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:30:22.460 [2024-07-12 09:36:08.657722] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:22.460 [2024-07-12 09:36:08.657732] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:22.460 [2024-07-12 09:36:08.657742] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:22.460 [2024-07-12 09:36:08.657751] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:22.460 [2024-07-12 09:36:08.657762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.460 [2024-07-12 09:36:08.657773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:22.460 [2024-07-12 09:36:08.657783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.320 ms 00:30:22.460 [2024-07-12 09:36:08.657793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.460 [2024-07-12 09:36:08.657881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.460 [2024-07-12 09:36:08.657894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:22.460 [2024-07-12 09:36:08.657906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:30:22.460 [2024-07-12 09:36:08.657920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.460 [2024-07-12 09:36:08.658026] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:22.460 [2024-07-12 09:36:08.658042] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:22.460 [2024-07-12 09:36:08.658053] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:22.460 [2024-07-12 09:36:08.658064] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:22.460 [2024-07-12 09:36:08.658075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:22.460 [2024-07-12 09:36:08.658084] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:22.460 [2024-07-12 09:36:08.658094] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:22.460 [2024-07-12 09:36:08.658103] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:22.460 [2024-07-12 09:36:08.658115] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:22.460 [2024-07-12 09:36:08.658125] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:22.460 [2024-07-12 09:36:08.658134] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:22.460 [2024-07-12 09:36:08.658144] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:22.460 [2024-07-12 09:36:08.658153] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:22.460 [2024-07-12 09:36:08.658163] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:22.460 [2024-07-12 09:36:08.658173] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:22.460 [2024-07-12 09:36:08.658182] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:22.460 [2024-07-12 09:36:08.658191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:22.460 [2024-07-12 09:36:08.658253] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:22.460 [2024-07-12 09:36:08.658266] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:22.460 [2024-07-12 09:36:08.658277] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:22.460 [2024-07-12 09:36:08.658287] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:22.460 [2024-07-12 09:36:08.658297] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:22.460 [2024-07-12 09:36:08.658306] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:22.460 [2024-07-12 09:36:08.658316] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:22.460 [2024-07-12 09:36:08.658342] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:22.460 [2024-07-12 09:36:08.658353] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:22.460 [2024-07-12 09:36:08.658363] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:22.460 [2024-07-12 09:36:08.658373] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:22.460 [2024-07-12 09:36:08.658383] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:22.460 [2024-07-12 09:36:08.658393] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:22.460 [2024-07-12 09:36:08.658403] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:22.460 [2024-07-12 09:36:08.658413] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:22.460 [2024-07-12 09:36:08.658422] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:22.460 [2024-07-12 09:36:08.658432] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:22.460 [2024-07-12 09:36:08.658442] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:22.460 [2024-07-12 09:36:08.658453] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:22.461 [2024-07-12 09:36:08.658462] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:22.461 [2024-07-12 09:36:08.658472] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:22.461 [2024-07-12 09:36:08.658482] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:22.461 [2024-07-12 09:36:08.658492] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:22.461 [2024-07-12 09:36:08.658502] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:22.461 [2024-07-12 09:36:08.658512] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:22.461 [2024-07-12 09:36:08.658521] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:22.461 [2024-07-12 09:36:08.658531] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:22.461 [2024-07-12 09:36:08.658549] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:22.461 [2024-07-12 09:36:08.658561] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:22.461 [2024-07-12 09:36:08.658572] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:22.461 [2024-07-12 09:36:08.658589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:22.461 [2024-07-12 09:36:08.658600] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:22.461 [2024-07-12 09:36:08.658618] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:22.461 [2024-07-12 09:36:08.658633] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:22.461 [2024-07-12 09:36:08.658666] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:22.461 [2024-07-12 09:36:08.658678] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:22.461 [2024-07-12 09:36:08.658695] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:22.461 [2024-07-12 09:36:08.658717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:22.461 [2024-07-12 09:36:08.658739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:22.461 [2024-07-12 09:36:08.658751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:22.461 [2024-07-12 09:36:08.658769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:22.461 [2024-07-12 09:36:08.658780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:22.461 [2024-07-12 09:36:08.658796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:22.461 [2024-07-12 09:36:08.658808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:22.461 [2024-07-12 09:36:08.658819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:22.461 [2024-07-12 09:36:08.658833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:22.461 [2024-07-12 09:36:08.658845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:22.461 [2024-07-12 09:36:08.658870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:22.461 [2024-07-12 09:36:08.658884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:22.461 [2024-07-12 09:36:08.658896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:22.461 [2024-07-12 09:36:08.658907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:22.461 [2024-07-12 09:36:08.658918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:22.461 [2024-07-12 09:36:08.658929] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:22.461 [2024-07-12 09:36:08.658940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:22.461 [2024-07-12 09:36:08.658952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:22.461 [2024-07-12 09:36:08.658964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:22.461 [2024-07-12 09:36:08.658975] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:22.461 [2024-07-12 09:36:08.658986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:22.461 [2024-07-12 09:36:08.658998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:22.461 [2024-07-12 09:36:08.659010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:22.461 [2024-07-12 09:36:08.659022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.032 ms 00:30:22.461 [2024-07-12 09:36:08.659039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:22.461 [2024-07-12 09:36:08.659124] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:30:22.461 [2024-07-12 09:36:08.659151] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:24.363 [2024-07-12 09:36:10.594814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.363 [2024-07-12 09:36:10.595109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:24.363 [2024-07-12 09:36:10.595264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1935.701 ms 00:30:24.363 [2024-07-12 09:36:10.595320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.363 [2024-07-12 09:36:10.627843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.363 [2024-07-12 09:36:10.628200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:24.363 [2024-07-12 09:36:10.628337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.172 ms 00:30:24.363 [2024-07-12 09:36:10.628400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.363 [2024-07-12 09:36:10.628659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.363 [2024-07-12 09:36:10.628796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:24.363 [2024-07-12 09:36:10.628823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:24.363 [2024-07-12 09:36:10.628835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.363 [2024-07-12 09:36:10.666565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.363 [2024-07-12 09:36:10.666637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:24.363 [2024-07-12 09:36:10.666672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.653 ms 00:30:24.363 [2024-07-12 09:36:10.666683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.363 [2024-07-12 09:36:10.666753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.363 [2024-07-12 09:36:10.666767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:24.363 [2024-07-12 09:36:10.666779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:24.363 [2024-07-12 09:36:10.666789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.363 [2024-07-12 09:36:10.667121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.363 [2024-07-12 09:36:10.667138] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:24.363 [2024-07-12 09:36:10.667155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.261 ms 00:30:24.363 [2024-07-12 09:36:10.667165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.363 [2024-07-12 09:36:10.667263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.363 [2024-07-12 09:36:10.667282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:24.363 [2024-07-12 09:36:10.667294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:30:24.363 [2024-07-12 09:36:10.667305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.363 [2024-07-12 09:36:10.683536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.363 [2024-07-12 09:36:10.683580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:24.363 [2024-07-12 09:36:10.683636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.184 ms 00:30:24.363 [2024-07-12 09:36:10.683648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.363 [2024-07-12 09:36:10.697982] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:24.363 [2024-07-12 09:36:10.698024] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:24.363 [2024-07-12 09:36:10.698057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.363 [2024-07-12 09:36:10.698067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:24.363 [2024-07-12 09:36:10.698079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.263 ms 00:30:24.363 [2024-07-12 09:36:10.698089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.363 [2024-07-12 09:36:10.714675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.363 [2024-07-12 09:36:10.714714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:24.363 [2024-07-12 09:36:10.714746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.541 ms 00:30:24.363 [2024-07-12 09:36:10.714757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.621 [2024-07-12 09:36:10.728717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.621 [2024-07-12 09:36:10.728753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:24.621 [2024-07-12 09:36:10.728784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.916 ms 00:30:24.621 [2024-07-12 09:36:10.728794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.621 [2024-07-12 09:36:10.742442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.621 [2024-07-12 09:36:10.742493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:24.621 [2024-07-12 09:36:10.742525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.597 ms 00:30:24.621 [2024-07-12 09:36:10.742535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.621 [2024-07-12 09:36:10.743323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.621 [2024-07-12 09:36:10.743360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:24.621 [2024-07-12 
09:36:10.743375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.664 ms 00:30:24.621 [2024-07-12 09:36:10.743385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.621 [2024-07-12 09:36:10.823131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.621 [2024-07-12 09:36:10.823226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:24.621 [2024-07-12 09:36:10.823263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 79.713 ms 00:30:24.621 [2024-07-12 09:36:10.823274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.621 [2024-07-12 09:36:10.834272] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:24.621 [2024-07-12 09:36:10.834937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.621 [2024-07-12 09:36:10.834961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:24.621 [2024-07-12 09:36:10.834975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.551 ms 00:30:24.621 [2024-07-12 09:36:10.834991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.621 [2024-07-12 09:36:10.835097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.621 [2024-07-12 09:36:10.835116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:24.621 [2024-07-12 09:36:10.835127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:24.621 [2024-07-12 09:36:10.835137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.622 [2024-07-12 09:36:10.835290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.622 [2024-07-12 09:36:10.835315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:24.622 [2024-07-12 09:36:10.835327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.085 ms 00:30:24.622 [2024-07-12 09:36:10.835338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.622 [2024-07-12 09:36:10.835385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.622 [2024-07-12 09:36:10.835400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:24.622 [2024-07-12 09:36:10.835411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:24.622 [2024-07-12 09:36:10.835421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.622 [2024-07-12 09:36:10.835462] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:24.622 [2024-07-12 09:36:10.835478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.622 [2024-07-12 09:36:10.835489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:24.622 [2024-07-12 09:36:10.835499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:30:24.622 [2024-07-12 09:36:10.835510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.622 [2024-07-12 09:36:10.862715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.622 [2024-07-12 09:36:10.862756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:24.622 [2024-07-12 09:36:10.862788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.180 ms 00:30:24.622 [2024-07-12 09:36:10.862798] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.622 [2024-07-12 09:36:10.862880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.622 [2024-07-12 09:36:10.862897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:24.622 [2024-07-12 09:36:10.862909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:30:24.622 [2024-07-12 09:36:10.862918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.622 [2024-07-12 09:36:10.864304] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2233.281 ms, result 0 00:30:24.622 [2024-07-12 09:36:10.879099] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:24.622 [2024-07-12 09:36:10.895108] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:24.622 [2024-07-12 09:36:10.903435] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:24.622 09:36:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:24.622 09:36:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:30:24.622 09:36:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:24.622 09:36:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:24.622 09:36:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:24.880 [2024-07-12 09:36:11.191665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.880 [2024-07-12 09:36:11.191730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:24.880 [2024-07-12 09:36:11.191751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:30:24.880 [2024-07-12 09:36:11.191768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.880 [2024-07-12 09:36:11.191803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.880 [2024-07-12 09:36:11.191826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:24.880 [2024-07-12 09:36:11.191839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:24.880 [2024-07-12 09:36:11.191849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.880 [2024-07-12 09:36:11.191877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.880 [2024-07-12 09:36:11.191890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:24.880 [2024-07-12 09:36:11.191902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:24.880 [2024-07-12 09:36:11.191913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.880 [2024-07-12 09:36:11.192049] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.354 ms, result 0 00:30:24.880 true 00:30:24.880 09:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:25.139 { 00:30:25.139 "name": "ftl", 00:30:25.139 "properties": [ 00:30:25.139 { 00:30:25.139 "name": "superblock_version", 00:30:25.139 "value": 5, 00:30:25.139 "read-only": true 00:30:25.139 }, 
00:30:25.139 { 00:30:25.139 "name": "base_device", 00:30:25.139 "bands": [ 00:30:25.139 { 00:30:25.139 "id": 0, 00:30:25.139 "state": "CLOSED", 00:30:25.139 "validity": 1.0 00:30:25.139 }, 00:30:25.139 { 00:30:25.139 "id": 1, 00:30:25.139 "state": "CLOSED", 00:30:25.139 "validity": 1.0 00:30:25.139 }, 00:30:25.139 { 00:30:25.139 "id": 2, 00:30:25.139 "state": "CLOSED", 00:30:25.139 "validity": 0.007843137254901933 00:30:25.139 }, 00:30:25.139 { 00:30:25.139 "id": 3, 00:30:25.139 "state": "FREE", 00:30:25.139 "validity": 0.0 00:30:25.139 }, 00:30:25.139 { 00:30:25.139 "id": 4, 00:30:25.139 "state": "FREE", 00:30:25.139 "validity": 0.0 00:30:25.139 }, 00:30:25.139 { 00:30:25.139 "id": 5, 00:30:25.139 "state": "FREE", 00:30:25.139 "validity": 0.0 00:30:25.139 }, 00:30:25.139 { 00:30:25.139 "id": 6, 00:30:25.139 "state": "FREE", 00:30:25.139 "validity": 0.0 00:30:25.139 }, 00:30:25.139 { 00:30:25.139 "id": 7, 00:30:25.139 "state": "FREE", 00:30:25.139 "validity": 0.0 00:30:25.139 }, 00:30:25.139 { 00:30:25.139 "id": 8, 00:30:25.139 "state": "FREE", 00:30:25.139 "validity": 0.0 00:30:25.139 }, 00:30:25.139 { 00:30:25.139 "id": 9, 00:30:25.139 "state": "FREE", 00:30:25.139 "validity": 0.0 00:30:25.139 }, 00:30:25.139 { 00:30:25.139 "id": 10, 00:30:25.139 "state": "FREE", 00:30:25.139 "validity": 0.0 00:30:25.139 }, 00:30:25.139 { 00:30:25.139 "id": 11, 00:30:25.139 "state": "FREE", 00:30:25.139 "validity": 0.0 00:30:25.139 }, 00:30:25.139 { 00:30:25.139 "id": 12, 00:30:25.139 "state": "FREE", 00:30:25.139 "validity": 0.0 00:30:25.139 }, 00:30:25.139 { 00:30:25.139 "id": 13, 00:30:25.139 "state": "FREE", 00:30:25.139 "validity": 0.0 00:30:25.139 }, 00:30:25.139 { 00:30:25.139 "id": 14, 00:30:25.139 "state": "FREE", 00:30:25.139 "validity": 0.0 00:30:25.139 }, 00:30:25.139 { 00:30:25.139 "id": 15, 00:30:25.139 "state": "FREE", 00:30:25.139 "validity": 0.0 00:30:25.139 }, 00:30:25.139 { 00:30:25.139 "id": 16, 00:30:25.139 "state": "FREE", 00:30:25.139 "validity": 0.0 00:30:25.139 }, 00:30:25.139 { 00:30:25.139 "id": 17, 00:30:25.139 "state": "FREE", 00:30:25.139 "validity": 0.0 00:30:25.139 } 00:30:25.139 ], 00:30:25.139 "read-only": true 00:30:25.139 }, 00:30:25.139 { 00:30:25.139 "name": "cache_device", 00:30:25.139 "type": "bdev", 00:30:25.139 "chunks": [ 00:30:25.139 { 00:30:25.139 "id": 0, 00:30:25.139 "state": "INACTIVE", 00:30:25.139 "utilization": 0.0 00:30:25.139 }, 00:30:25.139 { 00:30:25.139 "id": 1, 00:30:25.139 "state": "OPEN", 00:30:25.139 "utilization": 0.0 00:30:25.139 }, 00:30:25.139 { 00:30:25.139 "id": 2, 00:30:25.139 "state": "OPEN", 00:30:25.139 "utilization": 0.0 00:30:25.139 }, 00:30:25.139 { 00:30:25.139 "id": 3, 00:30:25.139 "state": "FREE", 00:30:25.139 "utilization": 0.0 00:30:25.139 }, 00:30:25.139 { 00:30:25.139 "id": 4, 00:30:25.139 "state": "FREE", 00:30:25.139 "utilization": 0.0 00:30:25.139 } 00:30:25.139 ], 00:30:25.139 "read-only": true 00:30:25.139 }, 00:30:25.139 { 00:30:25.139 "name": "verbose_mode", 00:30:25.139 "value": true, 00:30:25.139 "unit": "", 00:30:25.139 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:25.139 }, 00:30:25.139 { 00:30:25.139 "name": "prep_upgrade_on_shutdown", 00:30:25.139 "value": false, 00:30:25.139 "unit": "", 00:30:25.139 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:25.139 } 00:30:25.139 ] 00:30:25.139 } 00:30:25.139 09:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:30:25.139 09:36:11 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:25.139 09:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:25.705 09:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:30:25.705 09:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:30:25.705 09:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:30:25.705 09:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:30:25.705 09:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:25.705 Validate MD5 checksum, iteration 1 00:30:25.705 09:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:30:25.705 09:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:30:25.705 09:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:30:25.705 09:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:25.705 09:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:25.705 09:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:25.705 09:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:25.705 09:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:25.705 09:36:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:25.705 09:36:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:25.705 09:36:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:25.705 09:36:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:25.705 09:36:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:25.963 [2024-07-12 09:36:12.080705] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:30:25.963 [2024-07-12 09:36:12.081097] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86609 ] 00:30:25.963 [2024-07-12 09:36:12.238015] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.221 [2024-07-12 09:36:12.416730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:30.330  Copying: 510/1024 [MB] (510 MBps) Copying: 976/1024 [MB] (466 MBps) Copying: 1024/1024 [MB] (average 484 MBps) 00:30:30.330 00:30:30.330 09:36:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:30.330 09:36:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:32.860 09:36:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:32.860 Validate MD5 checksum, iteration 2 00:30:32.860 09:36:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=0eef93adcbcfb20a94c4cb0ece3b1935 00:30:32.860 09:36:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 0eef93adcbcfb20a94c4cb0ece3b1935 != \0\e\e\f\9\3\a\d\c\b\c\f\b\2\0\a\9\4\c\4\c\b\0\e\c\e\3\b\1\9\3\5 ]] 00:30:32.860 09:36:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:32.860 09:36:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:32.860 09:36:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:32.860 09:36:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:32.860 09:36:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:32.860 09:36:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:32.860 09:36:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:32.860 09:36:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:32.860 09:36:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:32.860 [2024-07-12 09:36:18.784312] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:30:32.860 [2024-07-12 09:36:18.784466] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86682 ] 00:30:32.860 [2024-07-12 09:36:18.949793] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.860 [2024-07-12 09:36:19.152152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:39.009  Copying: 484/1024 [MB] (484 MBps) Copying: 970/1024 [MB] (486 MBps) Copying: 1024/1024 [MB] (average 485 MBps) 00:30:39.009 00:30:39.009 09:36:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:39.009 09:36:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:40.916 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:40.916 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=da4f06f655923aaea7a1e58169c8c689 00:30:40.916 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ da4f06f655923aaea7a1e58169c8c689 != \d\a\4\f\0\6\f\6\5\5\9\2\3\a\a\e\a\7\a\1\e\5\8\1\6\9\c\8\c\6\8\9 ]] 00:30:40.916 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:40.916 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:40.916 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:30:40.916 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 86549 ]] 00:30:40.916 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 86549 00:30:40.916 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:30:40.916 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:30:40.916 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:40.916 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:40.916 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:40.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:40.916 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86768 00:30:40.917 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:40.917 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:40.917 09:36:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86768 00:30:40.917 09:36:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86768 ']' 00:30:40.917 09:36:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:40.917 09:36:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:40.917 09:36:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
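The xtrace above shows the shape of the prep-upgrade check before the dirty shutdown: enable verbose_mode on the ftl bdev, use jq over bdev_ftl_get_properties to confirm no NV cache chunk is utilized and no band is left OPENED, read two 1 GiB windows from ftln1 over NVMe/TCP and record their MD5 sums, then kill -9 the target and restart it from tgt.json. The sketch below reconstructs that flow for readability; it is not the actual test/ftl/upgrade_shutdown.sh source, and the paths, variable names, and reference-sum handling are illustrative.

    #!/usr/bin/env bash
    # Illustrative reconstruction of the flow traced above (not the real test script).
    rpc=scripts/rpc.py
    file=test/ftl/file                      # dump target used by the test

    # 1. Expose advanced FTL properties and check that nothing is half-written.
    $rpc bdev_ftl_set_property -b ftl -p verbose_mode -v true
    props=$($rpc bdev_ftl_get_properties -b ftl)
    used=$(jq '[.properties[] | select(.name == "cache_device")
                | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")
    opened=$(jq '[.properties[] | select(.name == "bands")
                  | .bands[] | select(.state == "OPENED")] | length' <<< "$props")
    echo "used=$used opened=$opened"        # both evaluate to 0 in the run above

    # 2. Read two 1 GiB windows from the NVMe/TCP-attached ftln1 bdev and
    #    record their MD5 sums (the real script compares each against a
    #    reference value captured earlier in the test).
    declare -a sums
    skip=0
    for i in 0 1; do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json=test/ftl/config/ini.json --ib=ftln1 --of="$file" \
            --bs=1048576 --count=1024 --qd=2 --skip="$skip"
        sums[$i]=$(md5sum "$file" | cut -f1 '-d ')
        skip=$((skip + 1024))
    done

    # 3. Force a dirty shutdown and restart the target from its saved config;
    #    the recovered device must reproduce ${sums[0]} and ${sums[1]}.
    : "${spdk_tgt_pid:?pid of the running spdk_tgt (86549 in the log above)}"
    kill -9 "$spdk_tgt_pid"
    build/bin/spdk_tgt '--cpumask=[0]' --config=test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
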
00:30:40.917 09:36:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:40.917 09:36:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:41.176 [2024-07-12 09:36:27.313865] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:41.176 [2024-07-12 09:36:27.314976] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86768 ] 00:30:41.176 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 86549 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:30:41.176 [2024-07-12 09:36:27.480144] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:41.435 [2024-07-12 09:36:27.644143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.371 [2024-07-12 09:36:28.376559] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:42.371 [2024-07-12 09:36:28.376868] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:42.371 [2024-07-12 09:36:28.523977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.371 [2024-07-12 09:36:28.524271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:42.371 [2024-07-12 09:36:28.524432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:42.371 [2024-07-12 09:36:28.524561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.371 [2024-07-12 09:36:28.524714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.371 [2024-07-12 09:36:28.524740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:42.371 [2024-07-12 09:36:28.524754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:30:42.371 [2024-07-12 09:36:28.524765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.371 [2024-07-12 09:36:28.524819] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:42.371 [2024-07-12 09:36:28.525789] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:42.371 [2024-07-12 09:36:28.525848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.371 [2024-07-12 09:36:28.525862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:42.371 [2024-07-12 09:36:28.525874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.037 ms 00:30:42.371 [2024-07-12 09:36:28.525885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.371 [2024-07-12 09:36:28.526379] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:42.371 [2024-07-12 09:36:28.544625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.371 [2024-07-12 09:36:28.544665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:42.371 [2024-07-12 09:36:28.544698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.248 ms 00:30:42.371 [2024-07-12 09:36:28.544714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.371 [2024-07-12 09:36:28.555584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:30:42.371 [2024-07-12 09:36:28.555665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:42.371 [2024-07-12 09:36:28.555683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:30:42.371 [2024-07-12 09:36:28.555695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.371 [2024-07-12 09:36:28.556213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.371 [2024-07-12 09:36:28.556255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:42.371 [2024-07-12 09:36:28.556293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.413 ms 00:30:42.371 [2024-07-12 09:36:28.556304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.371 [2024-07-12 09:36:28.556366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.371 [2024-07-12 09:36:28.556384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:42.371 [2024-07-12 09:36:28.556412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:30:42.371 [2024-07-12 09:36:28.556438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.371 [2024-07-12 09:36:28.556495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.371 [2024-07-12 09:36:28.556511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:42.371 [2024-07-12 09:36:28.556523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:30:42.371 [2024-07-12 09:36:28.556538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.371 [2024-07-12 09:36:28.556572] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:42.372 [2024-07-12 09:36:28.560531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.372 [2024-07-12 09:36:28.560569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:42.372 [2024-07-12 09:36:28.560616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.966 ms 00:30:42.372 [2024-07-12 09:36:28.560642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.372 [2024-07-12 09:36:28.560693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.372 [2024-07-12 09:36:28.560709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:42.372 [2024-07-12 09:36:28.560720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:42.372 [2024-07-12 09:36:28.560731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.372 [2024-07-12 09:36:28.560777] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:42.372 [2024-07-12 09:36:28.560808] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:42.372 [2024-07-12 09:36:28.560850] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:42.372 [2024-07-12 09:36:28.560871] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:30:42.372 [2024-07-12 09:36:28.560968] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:42.372 [2024-07-12 09:36:28.560983] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:42.372 [2024-07-12 09:36:28.560997] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:30:42.372 [2024-07-12 09:36:28.561026] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:42.372 [2024-07-12 09:36:28.561038] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:42.372 [2024-07-12 09:36:28.561050] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:42.372 [2024-07-12 09:36:28.561060] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:42.372 [2024-07-12 09:36:28.561075] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:42.372 [2024-07-12 09:36:28.561084] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:42.372 [2024-07-12 09:36:28.561095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.372 [2024-07-12 09:36:28.561105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:42.372 [2024-07-12 09:36:28.561120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.321 ms 00:30:42.372 [2024-07-12 09:36:28.561131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.372 [2024-07-12 09:36:28.561214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.372 [2024-07-12 09:36:28.561228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:42.372 [2024-07-12 09:36:28.561239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:30:42.372 [2024-07-12 09:36:28.561249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.372 [2024-07-12 09:36:28.561417] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:42.372 [2024-07-12 09:36:28.561436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:42.372 [2024-07-12 09:36:28.561449] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:42.372 [2024-07-12 09:36:28.561461] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:42.372 [2024-07-12 09:36:28.561472] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:42.372 [2024-07-12 09:36:28.561482] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:42.372 [2024-07-12 09:36:28.561493] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:42.372 [2024-07-12 09:36:28.561504] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:42.372 [2024-07-12 09:36:28.561515] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:42.372 [2024-07-12 09:36:28.561525] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:42.372 [2024-07-12 09:36:28.561535] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:42.372 [2024-07-12 09:36:28.561546] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:42.372 [2024-07-12 09:36:28.561557] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:42.372 [2024-07-12 09:36:28.561584] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:42.372 [2024-07-12 09:36:28.561594] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:30:42.372 [2024-07-12 09:36:28.561605] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:42.372 [2024-07-12 09:36:28.561616] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:42.372 [2024-07-12 09:36:28.561627] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:42.372 [2024-07-12 09:36:28.561638] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:42.372 [2024-07-12 09:36:28.561649] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:42.372 [2024-07-12 09:36:28.561659] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:42.372 [2024-07-12 09:36:28.561670] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:42.372 [2024-07-12 09:36:28.561681] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:42.372 [2024-07-12 09:36:28.561692] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:42.372 [2024-07-12 09:36:28.561703] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:42.372 [2024-07-12 09:36:28.561714] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:42.372 [2024-07-12 09:36:28.561724] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:42.372 [2024-07-12 09:36:28.561738] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:42.372 [2024-07-12 09:36:28.561750] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:42.372 [2024-07-12 09:36:28.561763] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:42.372 [2024-07-12 09:36:28.561776] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:42.372 [2024-07-12 09:36:28.561788] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:42.372 [2024-07-12 09:36:28.561801] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:42.372 [2024-07-12 09:36:28.561813] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:42.372 [2024-07-12 09:36:28.561825] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:42.372 [2024-07-12 09:36:28.561837] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:42.372 [2024-07-12 09:36:28.561849] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:42.372 [2024-07-12 09:36:28.561861] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:42.372 [2024-07-12 09:36:28.561873] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:42.372 [2024-07-12 09:36:28.561885] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:42.372 [2024-07-12 09:36:28.561897] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:42.372 [2024-07-12 09:36:28.561908] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:42.372 [2024-07-12 09:36:28.561931] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:42.372 [2024-07-12 09:36:28.561943] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:42.372 [2024-07-12 09:36:28.561957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:42.372 [2024-07-12 09:36:28.562000] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:42.372 [2024-07-12 09:36:28.562013] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:30:42.372 [2024-07-12 09:36:28.562025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:42.372 [2024-07-12 09:36:28.562036] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:42.372 [2024-07-12 09:36:28.562061] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:42.372 [2024-07-12 09:36:28.562072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:42.372 [2024-07-12 09:36:28.562082] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:42.372 [2024-07-12 09:36:28.562093] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:42.372 [2024-07-12 09:36:28.562105] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:42.372 [2024-07-12 09:36:28.562124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:42.372 [2024-07-12 09:36:28.562136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:42.373 [2024-07-12 09:36:28.562147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:42.373 [2024-07-12 09:36:28.562159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:42.373 [2024-07-12 09:36:28.562170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:42.373 [2024-07-12 09:36:28.562181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:42.373 [2024-07-12 09:36:28.562192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:42.373 [2024-07-12 09:36:28.562217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:42.373 [2024-07-12 09:36:28.562229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:42.373 [2024-07-12 09:36:28.562241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:42.373 [2024-07-12 09:36:28.562252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:42.373 [2024-07-12 09:36:28.562279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:42.373 [2024-07-12 09:36:28.562290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:42.373 [2024-07-12 09:36:28.562301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:42.373 [2024-07-12 09:36:28.562312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:42.373 [2024-07-12 09:36:28.562323] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:30:42.373 [2024-07-12 09:36:28.562335] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:42.373 [2024-07-12 09:36:28.562347] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:42.373 [2024-07-12 09:36:28.562357] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:42.373 [2024-07-12 09:36:28.562369] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:42.373 [2024-07-12 09:36:28.562381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:42.373 [2024-07-12 09:36:28.562392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.373 [2024-07-12 09:36:28.562405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:42.373 [2024-07-12 09:36:28.562417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.055 ms 00:30:42.373 [2024-07-12 09:36:28.562428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.373 [2024-07-12 09:36:28.593628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.373 [2024-07-12 09:36:28.593880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:42.373 [2024-07-12 09:36:28.594038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.129 ms 00:30:42.373 [2024-07-12 09:36:28.594091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.373 [2024-07-12 09:36:28.594402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.373 [2024-07-12 09:36:28.594468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:42.373 [2024-07-12 09:36:28.594697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:42.373 [2024-07-12 09:36:28.594760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.373 [2024-07-12 09:36:28.628912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.373 [2024-07-12 09:36:28.629130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:42.373 [2024-07-12 09:36:28.629321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.951 ms 00:30:42.373 [2024-07-12 09:36:28.629377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.373 [2024-07-12 09:36:28.629449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.373 [2024-07-12 09:36:28.629468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:42.373 [2024-07-12 09:36:28.629482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:42.373 [2024-07-12 09:36:28.629494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.373 [2024-07-12 09:36:28.629672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.373 [2024-07-12 09:36:28.629690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:42.373 [2024-07-12 09:36:28.629703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.102 ms 00:30:42.373 [2024-07-12 09:36:28.629714] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:42.373 [2024-07-12 09:36:28.629769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.373 [2024-07-12 09:36:28.629788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:42.373 [2024-07-12 09:36:28.629816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:30:42.373 [2024-07-12 09:36:28.629826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.373 [2024-07-12 09:36:28.645358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.373 [2024-07-12 09:36:28.645397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:42.373 [2024-07-12 09:36:28.645441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.504 ms 00:30:42.373 [2024-07-12 09:36:28.645452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.373 [2024-07-12 09:36:28.645570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.373 [2024-07-12 09:36:28.645619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:30:42.373 [2024-07-12 09:36:28.645634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:42.373 [2024-07-12 09:36:28.645644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.373 [2024-07-12 09:36:28.675223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.373 [2024-07-12 09:36:28.675262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:30:42.373 [2024-07-12 09:36:28.675296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.556 ms 00:30:42.373 [2024-07-12 09:36:28.675307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.373 [2024-07-12 09:36:28.686015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.373 [2024-07-12 09:36:28.686068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:42.373 [2024-07-12 09:36:28.686100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.568 ms 00:30:42.373 [2024-07-12 09:36:28.686110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.632 [2024-07-12 09:36:28.754350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.632 [2024-07-12 09:36:28.754576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:42.632 [2024-07-12 09:36:28.754605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 68.173 ms 00:30:42.632 [2024-07-12 09:36:28.754617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.632 [2024-07-12 09:36:28.754811] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:30:42.632 [2024-07-12 09:36:28.754959] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:30:42.632 [2024-07-12 09:36:28.755083] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:30:42.632 [2024-07-12 09:36:28.755231] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:30:42.632 [2024-07-12 09:36:28.755264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.632 [2024-07-12 09:36:28.755275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:30:42.632 [2024-07-12 
09:36:28.755287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.582 ms 00:30:42.632 [2024-07-12 09:36:28.755302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.632 [2024-07-12 09:36:28.755409] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:30:42.632 [2024-07-12 09:36:28.755430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.632 [2024-07-12 09:36:28.755441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:30:42.632 [2024-07-12 09:36:28.755452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:30:42.632 [2024-07-12 09:36:28.755462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.632 [2024-07-12 09:36:28.772447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.632 [2024-07-12 09:36:28.772486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:30:42.632 [2024-07-12 09:36:28.772523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.956 ms 00:30:42.632 [2024-07-12 09:36:28.772533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.632 [2024-07-12 09:36:28.782703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:42.632 [2024-07-12 09:36:28.782741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:30:42.632 [2024-07-12 09:36:28.782773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:42.632 [2024-07-12 09:36:28.782784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:42.632 [2024-07-12 09:36:28.782989] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:30:43.200 [2024-07-12 09:36:29.355135] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:30:43.200 [2024-07-12 09:36:29.355440] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:30:43.768 [2024-07-12 09:36:29.926699] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:30:43.768 [2024-07-12 09:36:29.926844] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:43.768 [2024-07-12 09:36:29.926869] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:43.768 [2024-07-12 09:36:29.926886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.768 [2024-07-12 09:36:29.926899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:30:43.768 [2024-07-12 09:36:29.926915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1144.024 ms 00:30:43.768 [2024-07-12 09:36:29.926943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.768 [2024-07-12 09:36:29.927035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.768 [2024-07-12 09:36:29.927050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:30:43.768 [2024-07-12 09:36:29.927062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:43.768 [2024-07-12 09:36:29.927073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:30:43.768 [2024-07-12 09:36:29.938719] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:43.768 [2024-07-12 09:36:29.938866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.768 [2024-07-12 09:36:29.938889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:43.768 [2024-07-12 09:36:29.938902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.766 ms 00:30:43.768 [2024-07-12 09:36:29.938913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.768 [2024-07-12 09:36:29.939720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.768 [2024-07-12 09:36:29.939760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:30:43.768 [2024-07-12 09:36:29.939788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.696 ms 00:30:43.768 [2024-07-12 09:36:29.939802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.768 [2024-07-12 09:36:29.942219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.768 [2024-07-12 09:36:29.942246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:30:43.768 [2024-07-12 09:36:29.942274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.374 ms 00:30:43.768 [2024-07-12 09:36:29.942284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.768 [2024-07-12 09:36:29.942332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.768 [2024-07-12 09:36:29.942347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:30:43.768 [2024-07-12 09:36:29.942359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:43.768 [2024-07-12 09:36:29.942368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.768 [2024-07-12 09:36:29.942480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.768 [2024-07-12 09:36:29.942501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:43.768 [2024-07-12 09:36:29.942512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:43.768 [2024-07-12 09:36:29.942521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.768 [2024-07-12 09:36:29.942547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.768 [2024-07-12 09:36:29.942559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:43.768 [2024-07-12 09:36:29.942575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:43.768 [2024-07-12 09:36:29.942585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.768 [2024-07-12 09:36:29.942639] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:43.768 [2024-07-12 09:36:29.942656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.768 [2024-07-12 09:36:29.942667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:43.768 [2024-07-12 09:36:29.942682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:30:43.768 [2024-07-12 09:36:29.942693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.768 [2024-07-12 09:36:29.942748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.768 
[2024-07-12 09:36:29.942762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:43.768 [2024-07-12 09:36:29.942773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:30:43.768 [2024-07-12 09:36:29.942783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.768 [2024-07-12 09:36:29.944069] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1419.528 ms, result 0 00:30:43.768 [2024-07-12 09:36:29.959257] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:43.768 [2024-07-12 09:36:29.975272] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:43.769 [2024-07-12 09:36:29.983585] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:43.769 Validate MD5 checksum, iteration 1 00:30:43.769 09:36:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:43.769 09:36:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:30:43.769 09:36:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:43.769 09:36:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:43.769 09:36:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:30:43.769 09:36:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:43.769 09:36:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:43.769 09:36:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:43.769 09:36:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:43.769 09:36:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:43.769 09:36:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:43.769 09:36:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:43.769 09:36:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:43.769 09:36:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:43.769 09:36:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:43.769 [2024-07-12 09:36:30.093955] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
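For orientation, the checksum pass being started above (upgrade_shutdown.sh@96-99) boils down to the loop below. This is a minimal sketch using only the helpers and arguments visible in the trace; the expected_sums array and the iterations/testdir variables are illustrative stand-ins for values the test defined or recorded earlier, which are not shown in this excerpt.

    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Read 1024 x 1 MiB blocks back from the ftln1 bdev over NVMe/TCP into a scratch file
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        # The md5 of what was read back must match the sum taken before the shutdown under test
        sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
        [[ $sum == "${expected_sums[i]}" ]] || exit 1
    done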
00:30:43.769 [2024-07-12 09:36:30.094392] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86803 ] 00:30:44.027 [2024-07-12 09:36:30.253929] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.286 [2024-07-12 09:36:30.415469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:48.406  Copying: 475/1024 [MB] (475 MBps) Copying: 946/1024 [MB] (471 MBps) Copying: 1024/1024 [MB] (average 474 MBps) 00:30:48.406 00:30:48.406 09:36:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:48.406 09:36:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:50.316 09:36:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:50.316 09:36:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=0eef93adcbcfb20a94c4cb0ece3b1935 00:30:50.316 09:36:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 0eef93adcbcfb20a94c4cb0ece3b1935 != \0\e\e\f\9\3\a\d\c\b\c\f\b\2\0\a\9\4\c\4\c\b\0\e\c\e\3\b\1\9\3\5 ]] 00:30:50.316 09:36:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:50.316 Validate MD5 checksum, iteration 2 00:30:50.316 09:36:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:50.316 09:36:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:50.316 09:36:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:50.316 09:36:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:50.316 09:36:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:50.316 09:36:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:50.317 09:36:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:50.317 09:36:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:50.588 [2024-07-12 09:36:36.707347] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
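The tcp_dd helper seen in both iterations is a thin wrapper around spdk_dd. Judging from the ftl/common.sh@198-199 trace lines it behaves roughly as sketched here, with rootdir standing in for /home/vagrant/spdk_repo/spdk and the per-iteration arguments passed straight through.

    tcp_dd() {
        # Make sure the initiator-side JSON config describing the NVMe/TCP attach exists
        tcp_initiator_setup
        # Run spdk_dd pinned to core 1, pointing it at the RPC socket and config file
        # shown in the trace, and forward --ib/--of/--bs/--count/--qd/--skip from the caller
        "$rootdir/build/bin/spdk_dd" "--cpumask=[1]" \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json="$rootdir/test/ftl/config/ini.json" \
            "$@"
    }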
00:30:50.588 [2024-07-12 09:36:36.708374] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86870 ] 00:30:50.588 [2024-07-12 09:36:36.889170] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.846 [2024-07-12 09:36:37.057383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:56.109  Copying: 439/1024 [MB] (439 MBps) Copying: 903/1024 [MB] (464 MBps) Copying: 1024/1024 [MB] (average 451 MBps) 00:30:56.109 00:30:56.109 09:36:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:56.109 09:36:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:58.637 09:36:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:58.637 09:36:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=da4f06f655923aaea7a1e58169c8c689 00:30:58.637 09:36:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ da4f06f655923aaea7a1e58169c8c689 != \d\a\4\f\0\6\f\6\5\5\9\2\3\a\a\e\a\7\a\1\e\5\8\1\6\9\c\8\c\6\8\9 ]] 00:30:58.637 09:36:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:58.637 09:36:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:58.637 09:36:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:30:58.637 09:36:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:30:58.637 09:36:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:30:58.637 09:36:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:58.637 09:36:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:30:58.637 09:36:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:30:58.637 09:36:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:30:58.637 09:36:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:30:58.637 09:36:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 86768 ]] 00:30:58.637 09:36:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 86768 00:30:58.637 09:36:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 86768 ']' 00:30:58.637 09:36:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 86768 00:30:58.637 09:36:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:30:58.637 09:36:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:58.637 09:36:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86768 00:30:58.637 killing process with pid 86768 00:30:58.637 09:36:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:58.637 09:36:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:58.637 09:36:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86768' 00:30:58.637 09:36:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 86768 00:30:58.637 09:36:44 
ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 86768 00:30:59.572 [2024-07-12 09:36:45.585869] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:59.572 [2024-07-12 09:36:45.603771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.572 [2024-07-12 09:36:45.603819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:59.572 [2024-07-12 09:36:45.603841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:59.572 [2024-07-12 09:36:45.603854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.572 [2024-07-12 09:36:45.603885] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:59.572 [2024-07-12 09:36:45.607270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.572 [2024-07-12 09:36:45.607299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:59.572 [2024-07-12 09:36:45.607329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.363 ms 00:30:59.572 [2024-07-12 09:36:45.607340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.572 [2024-07-12 09:36:45.607547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.572 [2024-07-12 09:36:45.607563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:59.572 [2024-07-12 09:36:45.607580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.183 ms 00:30:59.572 [2024-07-12 09:36:45.607591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.572 [2024-07-12 09:36:45.609067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.572 [2024-07-12 09:36:45.609108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:59.572 [2024-07-12 09:36:45.609156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.415 ms 00:30:59.572 [2024-07-12 09:36:45.609167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.572 [2024-07-12 09:36:45.610570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.572 [2024-07-12 09:36:45.610598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:59.572 [2024-07-12 09:36:45.610651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.278 ms 00:30:59.572 [2024-07-12 09:36:45.610662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.572 [2024-07-12 09:36:45.623533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.572 [2024-07-12 09:36:45.623571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:59.572 [2024-07-12 09:36:45.623604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.804 ms 00:30:59.572 [2024-07-12 09:36:45.623659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.572 [2024-07-12 09:36:45.630401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.572 [2024-07-12 09:36:45.630438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:59.572 [2024-07-12 09:36:45.630475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.697 ms 00:30:59.572 [2024-07-12 09:36:45.630485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.572 [2024-07-12 09:36:45.630560] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.572 [2024-07-12 09:36:45.630578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:59.572 [2024-07-12 09:36:45.630590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:30:59.572 [2024-07-12 09:36:45.630600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.572 [2024-07-12 09:36:45.643038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.572 [2024-07-12 09:36:45.643073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:30:59.572 [2024-07-12 09:36:45.643104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.371 ms 00:30:59.572 [2024-07-12 09:36:45.643114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.572 [2024-07-12 09:36:45.655918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.572 [2024-07-12 09:36:45.655959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:30:59.572 [2024-07-12 09:36:45.655975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.766 ms 00:30:59.572 [2024-07-12 09:36:45.655987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.572 [2024-07-12 09:36:45.668542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.572 [2024-07-12 09:36:45.668577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:59.572 [2024-07-12 09:36:45.668608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.471 ms 00:30:59.572 [2024-07-12 09:36:45.668652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.572 [2024-07-12 09:36:45.680824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.572 [2024-07-12 09:36:45.680863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:59.572 [2024-07-12 09:36:45.680894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.098 ms 00:30:59.572 [2024-07-12 09:36:45.680906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.572 [2024-07-12 09:36:45.680961] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:59.572 [2024-07-12 09:36:45.680998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:59.572 [2024-07-12 09:36:45.681042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:59.572 [2024-07-12 09:36:45.681053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:59.572 [2024-07-12 09:36:45.681064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:59.572 [2024-07-12 09:36:45.681075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:59.572 [2024-07-12 09:36:45.681085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:59.572 [2024-07-12 09:36:45.681095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:59.572 [2024-07-12 09:36:45.681106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:59.572 [2024-07-12 09:36:45.681116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:59.572 [2024-07-12 09:36:45.681126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:59.572 [2024-07-12 09:36:45.681137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:59.572 [2024-07-12 09:36:45.681147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:59.572 [2024-07-12 09:36:45.681158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:59.572 [2024-07-12 09:36:45.681168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:59.572 [2024-07-12 09:36:45.681178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:59.572 [2024-07-12 09:36:45.681188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:59.572 [2024-07-12 09:36:45.681199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:59.572 [2024-07-12 09:36:45.681209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:59.572 [2024-07-12 09:36:45.681221] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:59.572 [2024-07-12 09:36:45.681281] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 1cbd8591-ddee-4dcf-bcc8-e48d9ff55618 00:30:59.572 [2024-07-12 09:36:45.681296] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:59.572 [2024-07-12 09:36:45.681306] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:30:59.572 [2024-07-12 09:36:45.681316] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:30:59.572 [2024-07-12 09:36:45.681326] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:30:59.572 [2024-07-12 09:36:45.681335] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:59.572 [2024-07-12 09:36:45.681345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:59.572 [2024-07-12 09:36:45.681355] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:59.572 [2024-07-12 09:36:45.681364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:59.572 [2024-07-12 09:36:45.681373] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:59.572 [2024-07-12 09:36:45.681384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.572 [2024-07-12 09:36:45.681395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:59.572 [2024-07-12 09:36:45.681407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.425 ms 00:30:59.572 [2024-07-12 09:36:45.681418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.572 [2024-07-12 09:36:45.699075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.572 [2024-07-12 09:36:45.699111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:59.572 [2024-07-12 09:36:45.699127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.631 ms 00:30:59.572 [2024-07-12 09:36:45.699138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.572 [2024-07-12 09:36:45.699642] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:30:59.572 [2024-07-12 09:36:45.699668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:59.572 [2024-07-12 09:36:45.699681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.460 ms 00:30:59.572 [2024-07-12 09:36:45.699700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.572 [2024-07-12 09:36:45.751592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:59.572 [2024-07-12 09:36:45.751823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:59.572 [2024-07-12 09:36:45.751969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:59.572 [2024-07-12 09:36:45.752038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.572 [2024-07-12 09:36:45.752148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:59.572 [2024-07-12 09:36:45.752349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:59.572 [2024-07-12 09:36:45.752412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:59.572 [2024-07-12 09:36:45.752546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.572 [2024-07-12 09:36:45.752727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:59.572 [2024-07-12 09:36:45.752793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:59.572 [2024-07-12 09:36:45.752909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:59.572 [2024-07-12 09:36:45.752954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.572 [2024-07-12 09:36:45.753037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:59.572 [2024-07-12 09:36:45.753084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:59.572 [2024-07-12 09:36:45.753129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:59.572 [2024-07-12 09:36:45.753314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.572 [2024-07-12 09:36:45.850406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:59.572 [2024-07-12 09:36:45.850646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:59.572 [2024-07-12 09:36:45.850795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:59.572 [2024-07-12 09:36:45.850856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.831 [2024-07-12 09:36:45.933360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:59.831 [2024-07-12 09:36:45.933635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:59.831 [2024-07-12 09:36:45.933802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:59.831 [2024-07-12 09:36:45.933832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.831 [2024-07-12 09:36:45.933947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:59.831 [2024-07-12 09:36:45.933967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:59.831 [2024-07-12 09:36:45.933980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:59.831 [2024-07-12 09:36:45.934006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.831 [2024-07-12 
09:36:45.934092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:59.831 [2024-07-12 09:36:45.934122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:59.831 [2024-07-12 09:36:45.934133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:59.831 [2024-07-12 09:36:45.934143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.831 [2024-07-12 09:36:45.934295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:59.831 [2024-07-12 09:36:45.934315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:59.831 [2024-07-12 09:36:45.934328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:59.831 [2024-07-12 09:36:45.934345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.831 [2024-07-12 09:36:45.934433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:59.831 [2024-07-12 09:36:45.934450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:59.831 [2024-07-12 09:36:45.934462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:59.831 [2024-07-12 09:36:45.934473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.831 [2024-07-12 09:36:45.934529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:59.831 [2024-07-12 09:36:45.934565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:59.831 [2024-07-12 09:36:45.934577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:59.831 [2024-07-12 09:36:45.934588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.831 [2024-07-12 09:36:45.934655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:59.831 [2024-07-12 09:36:45.934678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:59.831 [2024-07-12 09:36:45.934692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:59.831 [2024-07-12 09:36:45.934703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.831 [2024-07-12 09:36:45.934846] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 331.058 ms, result 0 00:31:00.766 09:36:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:00.766 09:36:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:00.766 09:36:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:31:00.766 09:36:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:31:00.766 09:36:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:31:00.766 09:36:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:00.766 Remove shared memory files 00:31:00.766 09:36:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:31:00.766 09:36:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:00.766 09:36:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:00.766 09:36:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:00.766 09:36:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid86549 
00:31:00.766 09:36:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:00.766 09:36:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:00.766 ************************************ 00:31:00.766 END TEST ftl_upgrade_shutdown 00:31:00.766 ************************************ 00:31:00.766 00:31:00.766 real 1m32.094s 00:31:00.766 user 2m11.894s 00:31:00.766 sys 0m22.299s 00:31:00.766 09:36:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:00.766 09:36:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:00.766 09:36:47 ftl -- common/autotest_common.sh@1142 -- # return 0 00:31:00.766 09:36:47 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:31:00.766 09:36:47 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:00.766 09:36:47 ftl -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:31:00.766 09:36:47 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:00.766 09:36:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:00.766 ************************************ 00:31:00.766 START TEST ftl_restore_fast 00:31:00.766 ************************************ 00:31:00.766 09:36:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:01.025 * Looking for test storage... 00:31:01.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:01.025 09:36:47 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:01.025 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:31:01.025 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:01.025 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:01.025 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
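At this point ftl.sh@81 hands control to restore.sh with -f (fast shutdown) and -c 0000:00:10.0 (the NV-cache device), plus the base device 0000:00:11.0. The option handling traced over the next few entries reduces to the sketch below; the while/case wrapper is the usual getopts idiom and only the assignments that actually appear in the trace are shown.

    # restore.sh -f -c 0000:00:10.0 0000:00:11.0
    while getopts ':u:c:f' opt; do
        case $opt in
            f) fast_shutdown=1 ;;      # later appended as --fast-shutdown to bdev_ftl_create
            c) nv_cache=$OPTARG ;;     # 0000:00:10.0 in this run
        esac
    done
    shift $((OPTIND - 1))              # 'shift 3' in the trace
    device=$1                          # 0000:00:11.0
    timeout=240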
00:31:01.025 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:01.025 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:01.025 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:01.025 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:01.025 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:01.025 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:01.025 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:01.025 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:01.025 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:01.025 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:01.025 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:01.025 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:01.025 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.cCIo5qTRHn 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:31:01.026 09:36:47 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:31:01.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=87045 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 87045 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- common/autotest_common.sh@829 -- # '[' -z 87045 ']' 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:01.026 09:36:47 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:31:01.026 [2024-07-12 09:36:47.323142] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:31:01.026 [2024-07-12 09:36:47.323548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87045 ] 00:31:01.285 [2024-07-12 09:36:47.491841] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.543 [2024-07-12 09:36:47.666930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.123 09:36:48 ftl.ftl_restore_fast -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:02.123 09:36:48 ftl.ftl_restore_fast -- common/autotest_common.sh@862 -- # return 0 00:31:02.123 09:36:48 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:02.123 09:36:48 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:31:02.123 09:36:48 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:02.123 09:36:48 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:31:02.123 09:36:48 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:31:02.123 09:36:48 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:02.387 09:36:48 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:02.387 09:36:48 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:31:02.387 09:36:48 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:02.387 09:36:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:31:02.387 09:36:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:02.387 09:36:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:31:02.387 09:36:48 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1381 -- # local nb 00:31:02.387 09:36:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:02.646 09:36:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:02.646 { 00:31:02.646 "name": "nvme0n1", 00:31:02.646 "aliases": [ 00:31:02.646 "0b721d68-a1fd-46dc-b9c2-17a526ab7625" 00:31:02.646 ], 00:31:02.646 "product_name": "NVMe disk", 00:31:02.646 "block_size": 4096, 00:31:02.646 "num_blocks": 1310720, 00:31:02.646 "uuid": "0b721d68-a1fd-46dc-b9c2-17a526ab7625", 00:31:02.646 "assigned_rate_limits": { 00:31:02.646 "rw_ios_per_sec": 0, 00:31:02.646 "rw_mbytes_per_sec": 0, 00:31:02.646 "r_mbytes_per_sec": 0, 00:31:02.646 "w_mbytes_per_sec": 0 00:31:02.646 }, 00:31:02.646 "claimed": true, 00:31:02.646 "claim_type": "read_many_write_one", 00:31:02.646 "zoned": false, 00:31:02.646 "supported_io_types": { 00:31:02.646 "read": true, 00:31:02.646 "write": true, 00:31:02.646 "unmap": true, 00:31:02.646 "flush": true, 00:31:02.646 "reset": true, 00:31:02.646 "nvme_admin": true, 00:31:02.646 "nvme_io": true, 00:31:02.646 "nvme_io_md": false, 00:31:02.646 "write_zeroes": true, 00:31:02.646 "zcopy": false, 00:31:02.646 "get_zone_info": false, 00:31:02.646 "zone_management": false, 00:31:02.646 "zone_append": false, 00:31:02.646 "compare": true, 00:31:02.646 "compare_and_write": false, 00:31:02.646 "abort": true, 00:31:02.646 "seek_hole": false, 00:31:02.646 "seek_data": false, 00:31:02.646 "copy": true, 00:31:02.646 "nvme_iov_md": false 00:31:02.646 }, 00:31:02.646 "driver_specific": { 00:31:02.646 "nvme": [ 00:31:02.646 { 00:31:02.646 "pci_address": "0000:00:11.0", 00:31:02.646 "trid": { 00:31:02.646 "trtype": "PCIe", 00:31:02.646 "traddr": "0000:00:11.0" 00:31:02.646 }, 00:31:02.646 "ctrlr_data": { 00:31:02.646 "cntlid": 0, 00:31:02.646 "vendor_id": "0x1b36", 00:31:02.646 "model_number": "QEMU NVMe Ctrl", 00:31:02.646 "serial_number": "12341", 00:31:02.646 "firmware_revision": "8.0.0", 00:31:02.646 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:02.646 "oacs": { 00:31:02.646 "security": 0, 00:31:02.646 "format": 1, 00:31:02.646 "firmware": 0, 00:31:02.646 "ns_manage": 1 00:31:02.646 }, 00:31:02.646 "multi_ctrlr": false, 00:31:02.646 "ana_reporting": false 00:31:02.646 }, 00:31:02.646 "vs": { 00:31:02.646 "nvme_version": "1.4" 00:31:02.646 }, 00:31:02.646 "ns_data": { 00:31:02.646 "id": 1, 00:31:02.646 "can_share": false 00:31:02.646 } 00:31:02.646 } 00:31:02.646 ], 00:31:02.646 "mp_policy": "active_passive" 00:31:02.646 } 00:31:02.646 } 00:31:02.646 ]' 00:31:02.646 09:36:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:02.646 09:36:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:31:02.646 09:36:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:02.646 09:36:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:31:02.646 09:36:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:31:02.646 09:36:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:31:02.646 09:36:48 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:31:02.646 09:36:48 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:02.646 09:36:48 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:31:02.646 09:36:48 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:02.646 09:36:48 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:02.904 09:36:49 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=453afd9b-f505-43f3-be85-d87a99fa2442 00:31:02.904 09:36:49 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:31:02.904 09:36:49 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 453afd9b-f505-43f3-be85-d87a99fa2442 00:31:03.162 09:36:49 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:31:03.419 09:36:49 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=bcd71ede-1d42-499f-beec-d374d122e021 00:31:03.419 09:36:49 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u bcd71ede-1d42-499f-beec-d374d122e021 00:31:03.676 09:36:49 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=8c57128e-1b4a-482f-b9bf-2cea87ec03a6 00:31:03.676 09:36:49 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:31:03.676 09:36:49 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8c57128e-1b4a-482f-b9bf-2cea87ec03a6 00:31:03.676 09:36:49 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:31:03.676 09:36:49 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:03.676 09:36:49 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=8c57128e-1b4a-482f-b9bf-2cea87ec03a6 00:31:03.676 09:36:49 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:31:03.676 09:36:49 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 8c57128e-1b4a-482f-b9bf-2cea87ec03a6 00:31:03.676 09:36:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=8c57128e-1b4a-482f-b9bf-2cea87ec03a6 00:31:03.676 09:36:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:03.676 09:36:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:31:03.676 09:36:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:31:03.676 09:36:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8c57128e-1b4a-482f-b9bf-2cea87ec03a6 00:31:03.934 09:36:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:03.934 { 00:31:03.934 "name": "8c57128e-1b4a-482f-b9bf-2cea87ec03a6", 00:31:03.934 "aliases": [ 00:31:03.934 "lvs/nvme0n1p0" 00:31:03.934 ], 00:31:03.934 "product_name": "Logical Volume", 00:31:03.934 "block_size": 4096, 00:31:03.934 "num_blocks": 26476544, 00:31:03.934 "uuid": "8c57128e-1b4a-482f-b9bf-2cea87ec03a6", 00:31:03.934 "assigned_rate_limits": { 00:31:03.934 "rw_ios_per_sec": 0, 00:31:03.934 "rw_mbytes_per_sec": 0, 00:31:03.934 "r_mbytes_per_sec": 0, 00:31:03.934 "w_mbytes_per_sec": 0 00:31:03.934 }, 00:31:03.934 "claimed": false, 00:31:03.934 "zoned": false, 00:31:03.934 "supported_io_types": { 00:31:03.934 "read": true, 00:31:03.934 "write": true, 00:31:03.934 "unmap": true, 00:31:03.934 "flush": false, 00:31:03.934 "reset": true, 00:31:03.934 "nvme_admin": false, 00:31:03.934 "nvme_io": false, 00:31:03.934 "nvme_io_md": false, 00:31:03.934 "write_zeroes": true, 00:31:03.934 "zcopy": false, 00:31:03.934 "get_zone_info": false, 00:31:03.934 "zone_management": false, 00:31:03.934 
"zone_append": false, 00:31:03.934 "compare": false, 00:31:03.934 "compare_and_write": false, 00:31:03.934 "abort": false, 00:31:03.934 "seek_hole": true, 00:31:03.934 "seek_data": true, 00:31:03.934 "copy": false, 00:31:03.934 "nvme_iov_md": false 00:31:03.934 }, 00:31:03.934 "driver_specific": { 00:31:03.934 "lvol": { 00:31:03.934 "lvol_store_uuid": "bcd71ede-1d42-499f-beec-d374d122e021", 00:31:03.934 "base_bdev": "nvme0n1", 00:31:03.934 "thin_provision": true, 00:31:03.934 "num_allocated_clusters": 0, 00:31:03.934 "snapshot": false, 00:31:03.934 "clone": false, 00:31:03.934 "esnap_clone": false 00:31:03.934 } 00:31:03.934 } 00:31:03.934 } 00:31:03.934 ]' 00:31:03.934 09:36:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:04.193 09:36:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:31:04.193 09:36:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:04.193 09:36:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:31:04.193 09:36:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:31:04.193 09:36:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:31:04.193 09:36:50 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:31:04.193 09:36:50 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:31:04.193 09:36:50 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:04.451 09:36:50 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:04.451 09:36:50 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:04.451 09:36:50 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 8c57128e-1b4a-482f-b9bf-2cea87ec03a6 00:31:04.452 09:36:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=8c57128e-1b4a-482f-b9bf-2cea87ec03a6 00:31:04.452 09:36:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:04.452 09:36:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:31:04.452 09:36:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:31:04.452 09:36:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8c57128e-1b4a-482f-b9bf-2cea87ec03a6 00:31:04.759 09:36:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:04.759 { 00:31:04.759 "name": "8c57128e-1b4a-482f-b9bf-2cea87ec03a6", 00:31:04.759 "aliases": [ 00:31:04.759 "lvs/nvme0n1p0" 00:31:04.759 ], 00:31:04.759 "product_name": "Logical Volume", 00:31:04.759 "block_size": 4096, 00:31:04.759 "num_blocks": 26476544, 00:31:04.759 "uuid": "8c57128e-1b4a-482f-b9bf-2cea87ec03a6", 00:31:04.759 "assigned_rate_limits": { 00:31:04.759 "rw_ios_per_sec": 0, 00:31:04.759 "rw_mbytes_per_sec": 0, 00:31:04.759 "r_mbytes_per_sec": 0, 00:31:04.759 "w_mbytes_per_sec": 0 00:31:04.759 }, 00:31:04.759 "claimed": false, 00:31:04.759 "zoned": false, 00:31:04.759 "supported_io_types": { 00:31:04.759 "read": true, 00:31:04.759 "write": true, 00:31:04.759 "unmap": true, 00:31:04.759 "flush": false, 00:31:04.759 "reset": true, 00:31:04.759 "nvme_admin": false, 00:31:04.759 "nvme_io": false, 00:31:04.759 "nvme_io_md": false, 00:31:04.759 "write_zeroes": true, 00:31:04.759 "zcopy": false, 00:31:04.759 "get_zone_info": false, 00:31:04.759 
"zone_management": false, 00:31:04.759 "zone_append": false, 00:31:04.759 "compare": false, 00:31:04.759 "compare_and_write": false, 00:31:04.759 "abort": false, 00:31:04.759 "seek_hole": true, 00:31:04.759 "seek_data": true, 00:31:04.759 "copy": false, 00:31:04.759 "nvme_iov_md": false 00:31:04.759 }, 00:31:04.759 "driver_specific": { 00:31:04.759 "lvol": { 00:31:04.759 "lvol_store_uuid": "bcd71ede-1d42-499f-beec-d374d122e021", 00:31:04.759 "base_bdev": "nvme0n1", 00:31:04.759 "thin_provision": true, 00:31:04.759 "num_allocated_clusters": 0, 00:31:04.759 "snapshot": false, 00:31:04.760 "clone": false, 00:31:04.760 "esnap_clone": false 00:31:04.760 } 00:31:04.760 } 00:31:04.760 } 00:31:04.760 ]' 00:31:04.760 09:36:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:04.760 09:36:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:31:04.760 09:36:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:04.760 09:36:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:31:04.760 09:36:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:31:04.760 09:36:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:31:04.760 09:36:50 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:31:04.760 09:36:50 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:05.034 09:36:51 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:31:05.034 09:36:51 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 8c57128e-1b4a-482f-b9bf-2cea87ec03a6 00:31:05.034 09:36:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=8c57128e-1b4a-482f-b9bf-2cea87ec03a6 00:31:05.034 09:36:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:05.034 09:36:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:31:05.034 09:36:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:31:05.034 09:36:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8c57128e-1b4a-482f-b9bf-2cea87ec03a6 00:31:05.293 09:36:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:05.293 { 00:31:05.293 "name": "8c57128e-1b4a-482f-b9bf-2cea87ec03a6", 00:31:05.293 "aliases": [ 00:31:05.293 "lvs/nvme0n1p0" 00:31:05.293 ], 00:31:05.293 "product_name": "Logical Volume", 00:31:05.293 "block_size": 4096, 00:31:05.293 "num_blocks": 26476544, 00:31:05.293 "uuid": "8c57128e-1b4a-482f-b9bf-2cea87ec03a6", 00:31:05.293 "assigned_rate_limits": { 00:31:05.293 "rw_ios_per_sec": 0, 00:31:05.293 "rw_mbytes_per_sec": 0, 00:31:05.293 "r_mbytes_per_sec": 0, 00:31:05.293 "w_mbytes_per_sec": 0 00:31:05.293 }, 00:31:05.293 "claimed": false, 00:31:05.293 "zoned": false, 00:31:05.293 "supported_io_types": { 00:31:05.293 "read": true, 00:31:05.293 "write": true, 00:31:05.293 "unmap": true, 00:31:05.293 "flush": false, 00:31:05.293 "reset": true, 00:31:05.293 "nvme_admin": false, 00:31:05.293 "nvme_io": false, 00:31:05.293 "nvme_io_md": false, 00:31:05.293 "write_zeroes": true, 00:31:05.293 "zcopy": false, 00:31:05.293 "get_zone_info": false, 00:31:05.293 "zone_management": false, 00:31:05.293 "zone_append": false, 00:31:05.293 "compare": false, 00:31:05.293 "compare_and_write": false, 00:31:05.293 "abort": false, 
00:31:05.293 "seek_hole": true, 00:31:05.293 "seek_data": true, 00:31:05.293 "copy": false, 00:31:05.293 "nvme_iov_md": false 00:31:05.293 }, 00:31:05.293 "driver_specific": { 00:31:05.293 "lvol": { 00:31:05.293 "lvol_store_uuid": "bcd71ede-1d42-499f-beec-d374d122e021", 00:31:05.293 "base_bdev": "nvme0n1", 00:31:05.293 "thin_provision": true, 00:31:05.293 "num_allocated_clusters": 0, 00:31:05.293 "snapshot": false, 00:31:05.293 "clone": false, 00:31:05.293 "esnap_clone": false 00:31:05.293 } 00:31:05.293 } 00:31:05.293 } 00:31:05.293 ]' 00:31:05.293 09:36:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:05.293 09:36:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:31:05.293 09:36:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:05.293 09:36:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:31:05.293 09:36:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:31:05.293 09:36:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:31:05.293 09:36:51 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:31:05.293 09:36:51 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 8c57128e-1b4a-482f-b9bf-2cea87ec03a6 --l2p_dram_limit 10' 00:31:05.293 09:36:51 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:31:05.293 09:36:51 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:31:05.293 09:36:51 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:31:05.293 09:36:51 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:31:05.293 09:36:51 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:31:05.293 09:36:51 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8c57128e-1b4a-482f-b9bf-2cea87ec03a6 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:31:05.551 [2024-07-12 09:36:51.770730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.551 [2024-07-12 09:36:51.770790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:05.551 [2024-07-12 09:36:51.770827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:05.551 [2024-07-12 09:36:51.770840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.551 [2024-07-12 09:36:51.770914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.551 [2024-07-12 09:36:51.770934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:05.551 [2024-07-12 09:36:51.770946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:31:05.551 [2024-07-12 09:36:51.770968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.551 [2024-07-12 09:36:51.770995] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:05.551 [2024-07-12 09:36:51.772034] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:05.551 [2024-07-12 09:36:51.772089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.551 [2024-07-12 09:36:51.772124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:05.551 [2024-07-12 09:36:51.772137] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.100 ms 00:31:05.551 [2024-07-12 09:36:51.772151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.551 [2024-07-12 09:36:51.772293] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b198a0e0-0f51-42a0-ac11-889a0fc09615 00:31:05.551 [2024-07-12 09:36:51.773373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.552 [2024-07-12 09:36:51.773410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:05.552 [2024-07-12 09:36:51.773428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:31:05.552 [2024-07-12 09:36:51.773439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.552 [2024-07-12 09:36:51.778019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.552 [2024-07-12 09:36:51.778064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:05.552 [2024-07-12 09:36:51.778099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.525 ms 00:31:05.552 [2024-07-12 09:36:51.778110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.552 [2024-07-12 09:36:51.778273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.552 [2024-07-12 09:36:51.778295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:05.552 [2024-07-12 09:36:51.778309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:31:05.552 [2024-07-12 09:36:51.778320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.552 [2024-07-12 09:36:51.778396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.552 [2024-07-12 09:36:51.778413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:05.552 [2024-07-12 09:36:51.778429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:31:05.552 [2024-07-12 09:36:51.778440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.552 [2024-07-12 09:36:51.778472] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:05.552 [2024-07-12 09:36:51.782621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.552 [2024-07-12 09:36:51.782658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:05.552 [2024-07-12 09:36:51.782690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.161 ms 00:31:05.552 [2024-07-12 09:36:51.782704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.552 [2024-07-12 09:36:51.782745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.552 [2024-07-12 09:36:51.782762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:05.552 [2024-07-12 09:36:51.782773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:05.552 [2024-07-12 09:36:51.782785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.552 [2024-07-12 09:36:51.782825] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:05.552 [2024-07-12 09:36:51.782966] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:05.552 [2024-07-12 09:36:51.782983] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:05.552 [2024-07-12 09:36:51.783001] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:31:05.552 [2024-07-12 09:36:51.783015] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:05.552 [2024-07-12 09:36:51.783029] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:05.552 [2024-07-12 09:36:51.783039] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:05.552 [2024-07-12 09:36:51.783053] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:05.552 [2024-07-12 09:36:51.783063] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:05.552 [2024-07-12 09:36:51.783076] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:05.552 [2024-07-12 09:36:51.783086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.552 [2024-07-12 09:36:51.783098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:05.552 [2024-07-12 09:36:51.783109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:31:05.552 [2024-07-12 09:36:51.783120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.552 [2024-07-12 09:36:51.783222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.552 [2024-07-12 09:36:51.783242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:05.552 [2024-07-12 09:36:51.783253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:31:05.552 [2024-07-12 09:36:51.783280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.552 [2024-07-12 09:36:51.783380] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:05.552 [2024-07-12 09:36:51.783400] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:05.552 [2024-07-12 09:36:51.783423] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:05.552 [2024-07-12 09:36:51.783436] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:05.552 [2024-07-12 09:36:51.783447] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:05.552 [2024-07-12 09:36:51.783458] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:05.552 [2024-07-12 09:36:51.783468] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:05.552 [2024-07-12 09:36:51.783479] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:05.552 [2024-07-12 09:36:51.783488] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:05.552 [2024-07-12 09:36:51.783499] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:05.552 [2024-07-12 09:36:51.783509] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:05.552 [2024-07-12 09:36:51.783536] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:05.552 [2024-07-12 09:36:51.783561] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:05.552 [2024-07-12 09:36:51.783575] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:05.552 [2024-07-12 09:36:51.783586] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:05.552 [2024-07-12 09:36:51.783597] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:05.552 [2024-07-12 09:36:51.783652] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:05.552 [2024-07-12 09:36:51.783669] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:05.552 [2024-07-12 09:36:51.783680] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:05.552 [2024-07-12 09:36:51.783693] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:05.552 [2024-07-12 09:36:51.783704] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:05.552 [2024-07-12 09:36:51.783716] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:05.552 [2024-07-12 09:36:51.783727] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:05.552 [2024-07-12 09:36:51.783740] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:05.552 [2024-07-12 09:36:51.783750] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:05.552 [2024-07-12 09:36:51.783763] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:05.552 [2024-07-12 09:36:51.783774] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:05.552 [2024-07-12 09:36:51.783786] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:05.552 [2024-07-12 09:36:51.783797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:05.552 [2024-07-12 09:36:51.783811] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:05.552 [2024-07-12 09:36:51.783822] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:05.552 [2024-07-12 09:36:51.783834] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:05.552 [2024-07-12 09:36:51.783844] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:05.552 [2024-07-12 09:36:51.783859] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:05.552 [2024-07-12 09:36:51.783870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:05.552 [2024-07-12 09:36:51.783882] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:05.552 [2024-07-12 09:36:51.783893] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:05.552 [2024-07-12 09:36:51.783905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:05.552 [2024-07-12 09:36:51.783916] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:05.552 [2024-07-12 09:36:51.783930] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:05.552 [2024-07-12 09:36:51.783941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:05.552 [2024-07-12 09:36:51.783954] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:05.552 [2024-07-12 09:36:51.783965] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:05.552 [2024-07-12 09:36:51.783977] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:05.552 [2024-07-12 09:36:51.783989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:05.552 [2024-07-12 09:36:51.784016] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:31:05.552 [2024-07-12 09:36:51.784027] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:05.552 [2024-07-12 09:36:51.784041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:05.552 [2024-07-12 09:36:51.784052] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:05.552 [2024-07-12 09:36:51.784066] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:05.552 [2024-07-12 09:36:51.784076] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:05.552 [2024-07-12 09:36:51.784089] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:05.552 [2024-07-12 09:36:51.784100] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:05.552 [2024-07-12 09:36:51.784116] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:05.552 [2024-07-12 09:36:51.784132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:05.552 [2024-07-12 09:36:51.784147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:05.552 [2024-07-12 09:36:51.784158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:05.552 [2024-07-12 09:36:51.784171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:05.552 [2024-07-12 09:36:51.784183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:05.552 [2024-07-12 09:36:51.784196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:05.552 [2024-07-12 09:36:51.784207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:05.552 [2024-07-12 09:36:51.784236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:05.552 [2024-07-12 09:36:51.784249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:05.552 [2024-07-12 09:36:51.784264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:05.552 [2024-07-12 09:36:51.784276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:05.552 [2024-07-12 09:36:51.784291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:05.553 [2024-07-12 09:36:51.784303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:05.553 [2024-07-12 09:36:51.784316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:05.553 [2024-07-12 09:36:51.784328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:31:05.553 [2024-07-12 09:36:51.784341] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:05.553 [2024-07-12 09:36:51.784354] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:05.553 [2024-07-12 09:36:51.784369] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:05.553 [2024-07-12 09:36:51.784382] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:05.553 [2024-07-12 09:36:51.784395] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:05.553 [2024-07-12 09:36:51.784407] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:05.553 [2024-07-12 09:36:51.784422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.553 [2024-07-12 09:36:51.784433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:05.553 [2024-07-12 09:36:51.784447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.097 ms 00:31:05.553 [2024-07-12 09:36:51.784458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.553 [2024-07-12 09:36:51.784526] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:31:05.553 [2024-07-12 09:36:51.784544] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:08.079 [2024-07-12 09:36:53.926855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.079 [2024-07-12 09:36:53.926922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:08.080 [2024-07-12 09:36:53.926963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2142.336 ms 00:31:08.080 [2024-07-12 09:36:53.926992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.080 [2024-07-12 09:36:53.954944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.080 [2024-07-12 09:36:53.954998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:08.080 [2024-07-12 09:36:53.955052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.683 ms 00:31:08.080 [2024-07-12 09:36:53.955078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.080 [2024-07-12 09:36:53.955467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.080 [2024-07-12 09:36:53.955551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:08.080 [2024-07-12 09:36:53.955605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:31:08.080 [2024-07-12 09:36:53.955789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.080 [2024-07-12 09:36:53.991842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.080 [2024-07-12 09:36:53.991892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:08.080 [2024-07-12 09:36:53.991931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.971 ms 00:31:08.080 [2024-07-12 09:36:53.991943] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.080 [2024-07-12 09:36:53.992044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.080 [2024-07-12 09:36:53.992077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:08.080 [2024-07-12 09:36:53.992091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:31:08.080 [2024-07-12 09:36:53.992103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.080 [2024-07-12 09:36:53.992527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.080 [2024-07-12 09:36:53.992561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:08.080 [2024-07-12 09:36:53.992578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:31:08.080 [2024-07-12 09:36:53.992590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.080 [2024-07-12 09:36:53.992754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.080 [2024-07-12 09:36:53.992781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:08.080 [2024-07-12 09:36:53.992797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:31:08.080 [2024-07-12 09:36:53.992809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.080 [2024-07-12 09:36:54.009245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.080 [2024-07-12 09:36:54.009289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:08.080 [2024-07-12 09:36:54.009323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.406 ms 00:31:08.080 [2024-07-12 09:36:54.009334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.080 [2024-07-12 09:36:54.022109] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:08.080 [2024-07-12 09:36:54.025067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.080 [2024-07-12 09:36:54.025101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:08.080 [2024-07-12 09:36:54.025148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.643 ms 00:31:08.080 [2024-07-12 09:36:54.025161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.080 [2024-07-12 09:36:54.090412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.080 [2024-07-12 09:36:54.090495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:08.080 [2024-07-12 09:36:54.090516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.217 ms 00:31:08.080 [2024-07-12 09:36:54.090529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.080 [2024-07-12 09:36:54.090772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.080 [2024-07-12 09:36:54.090795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:08.080 [2024-07-12 09:36:54.090809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:31:08.080 [2024-07-12 09:36:54.090825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.080 [2024-07-12 09:36:54.123413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.080 [2024-07-12 09:36:54.123473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:31:08.080 [2024-07-12 09:36:54.123491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.524 ms 00:31:08.080 [2024-07-12 09:36:54.123503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.080 [2024-07-12 09:36:54.151482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.080 [2024-07-12 09:36:54.151542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:08.080 [2024-07-12 09:36:54.151560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.932 ms 00:31:08.080 [2024-07-12 09:36:54.151572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.080 [2024-07-12 09:36:54.152395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.080 [2024-07-12 09:36:54.152429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:08.080 [2024-07-12 09:36:54.152459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:31:08.080 [2024-07-12 09:36:54.152474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.080 [2024-07-12 09:36:54.233388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.080 [2024-07-12 09:36:54.233448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:08.080 [2024-07-12 09:36:54.233470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.856 ms 00:31:08.080 [2024-07-12 09:36:54.233489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.080 [2024-07-12 09:36:54.265132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.080 [2024-07-12 09:36:54.265193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:08.080 [2024-07-12 09:36:54.265226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.589 ms 00:31:08.080 [2024-07-12 09:36:54.265256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.080 [2024-07-12 09:36:54.295434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.080 [2024-07-12 09:36:54.295491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:08.080 [2024-07-12 09:36:54.295507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.100 ms 00:31:08.080 [2024-07-12 09:36:54.295519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.080 [2024-07-12 09:36:54.323109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.080 [2024-07-12 09:36:54.323182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:08.080 [2024-07-12 09:36:54.323214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.545 ms 00:31:08.080 [2024-07-12 09:36:54.323246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.080 [2024-07-12 09:36:54.323313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.080 [2024-07-12 09:36:54.323352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:08.080 [2024-07-12 09:36:54.323364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:31:08.080 [2024-07-12 09:36:54.323378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.080 [2024-07-12 09:36:54.323481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.080 [2024-07-12 
09:36:54.323505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:08.080 [2024-07-12 09:36:54.323517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:31:08.080 [2024-07-12 09:36:54.323529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.080 [2024-07-12 09:36:54.324718] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2553.465 ms, result 0 00:31:08.080 { 00:31:08.080 "name": "ftl0", 00:31:08.080 "uuid": "b198a0e0-0f51-42a0-ac11-889a0fc09615" 00:31:08.080 } 00:31:08.080 09:36:54 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:31:08.080 09:36:54 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:31:08.339 09:36:54 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:31:08.339 09:36:54 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:31:08.596 [2024-07-12 09:36:54.920347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.596 [2024-07-12 09:36:54.920404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:08.596 [2024-07-12 09:36:54.920443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:08.596 [2024-07-12 09:36:54.920454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.596 [2024-07-12 09:36:54.920491] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:08.596 [2024-07-12 09:36:54.923475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.596 [2024-07-12 09:36:54.923523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:08.596 [2024-07-12 09:36:54.923537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.962 ms 00:31:08.596 [2024-07-12 09:36:54.923549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.596 [2024-07-12 09:36:54.923861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.596 [2024-07-12 09:36:54.923892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:08.596 [2024-07-12 09:36:54.923916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:31:08.596 [2024-07-12 09:36:54.923930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.596 [2024-07-12 09:36:54.927060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.596 [2024-07-12 09:36:54.927091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:08.596 [2024-07-12 09:36:54.927120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.093 ms 00:31:08.596 [2024-07-12 09:36:54.927132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.596 [2024-07-12 09:36:54.933302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.596 [2024-07-12 09:36:54.933335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:08.596 [2024-07-12 09:36:54.933351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.149 ms 00:31:08.596 [2024-07-12 09:36:54.933363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.856 [2024-07-12 09:36:54.961872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:31:08.856 [2024-07-12 09:36:54.961931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:08.856 [2024-07-12 09:36:54.961948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.433 ms 00:31:08.856 [2024-07-12 09:36:54.961960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.856 [2024-07-12 09:36:54.980929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.856 [2024-07-12 09:36:54.981122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:08.856 [2024-07-12 09:36:54.981270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.925 ms 00:31:08.856 [2024-07-12 09:36:54.981301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.856 [2024-07-12 09:36:54.981525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.856 [2024-07-12 09:36:54.981561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:08.856 [2024-07-12 09:36:54.981577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:31:08.856 [2024-07-12 09:36:54.981591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.856 [2024-07-12 09:36:55.009483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.856 [2024-07-12 09:36:55.009540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:31:08.856 [2024-07-12 09:36:55.009556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.866 ms 00:31:08.856 [2024-07-12 09:36:55.009567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.856 [2024-07-12 09:36:55.036208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.856 [2024-07-12 09:36:55.036292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:31:08.856 [2024-07-12 09:36:55.036310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.599 ms 00:31:08.856 [2024-07-12 09:36:55.036323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.856 [2024-07-12 09:36:55.062878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.856 [2024-07-12 09:36:55.062936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:08.856 [2024-07-12 09:36:55.062952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.512 ms 00:31:08.856 [2024-07-12 09:36:55.062963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.856 [2024-07-12 09:36:55.090310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.856 [2024-07-12 09:36:55.090369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:08.856 [2024-07-12 09:36:55.090401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.239 ms 00:31:08.856 [2024-07-12 09:36:55.090413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.856 [2024-07-12 09:36:55.090485] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:08.856 [2024-07-12 09:36:55.090509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:08.856 [2024-07-12 09:36:55.090524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:08.856 [2024-07-12 09:36:55.090536] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:08.856 [2024-07-12 09:36:55.090547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:08.856 [2024-07-12 09:36:55.090559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:08.856 [2024-07-12 09:36:55.090569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:08.856 [2024-07-12 09:36:55.090581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:08.856 [2024-07-12 09:36:55.090592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:08.856 [2024-07-12 09:36:55.090606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:08.856 [2024-07-12 09:36:55.090616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:08.856 [2024-07-12 09:36:55.090628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:08.856 [2024-07-12 09:36:55.090638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:08.856 [2024-07-12 09:36:55.090650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:08.856 [2024-07-12 09:36:55.090660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:08.856 [2024-07-12 09:36:55.090672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:08.856 [2024-07-12 09:36:55.090699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:08.856 [2024-07-12 09:36:55.090711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:08.856 [2024-07-12 09:36:55.090721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:08.856 [2024-07-12 09:36:55.090735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:08.856 [2024-07-12 09:36:55.090746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:08.856 [2024-07-12 09:36:55.090758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.090769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.090781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.090792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.090806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.090817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.090830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.090841] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.090853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.090864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.090877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.090887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.090900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.090910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.090924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.090935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.090947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.090957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.090969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.090979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.090993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 
[2024-07-12 09:36:55.091133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:31:08.857 [2024-07-12 09:36:55.091479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:08.857 [2024-07-12 09:36:55.091872] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:08.857 [2024-07-12 09:36:55.091884] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b198a0e0-0f51-42a0-ac11-889a0fc09615 
00:31:08.857 [2024-07-12 09:36:55.091913] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:08.857 [2024-07-12 09:36:55.091925] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:08.857 [2024-07-12 09:36:55.091939] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:08.857 [2024-07-12 09:36:55.091965] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:08.857 [2024-07-12 09:36:55.091993] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:08.857 [2024-07-12 09:36:55.092019] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:08.857 [2024-07-12 09:36:55.092030] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:08.858 [2024-07-12 09:36:55.092054] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:08.858 [2024-07-12 09:36:55.092095] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:08.858 [2024-07-12 09:36:55.092105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.858 [2024-07-12 09:36:55.092117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:08.858 [2024-07-12 09:36:55.092127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.622 ms 00:31:08.858 [2024-07-12 09:36:55.092141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.858 [2024-07-12 09:36:55.107388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.858 [2024-07-12 09:36:55.107448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:08.858 [2024-07-12 09:36:55.107467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.129 ms 00:31:08.858 [2024-07-12 09:36:55.107481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.858 [2024-07-12 09:36:55.107918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.858 [2024-07-12 09:36:55.107954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:08.858 [2024-07-12 09:36:55.107973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:31:08.858 [2024-07-12 09:36:55.107990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.858 [2024-07-12 09:36:55.161326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.858 [2024-07-12 09:36:55.161394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:08.858 [2024-07-12 09:36:55.161411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.858 [2024-07-12 09:36:55.161423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.858 [2024-07-12 09:36:55.161497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.858 [2024-07-12 09:36:55.161514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:08.858 [2024-07-12 09:36:55.161527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.858 [2024-07-12 09:36:55.161539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.858 [2024-07-12 09:36:55.161640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.858 [2024-07-12 09:36:55.161663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:08.858 [2024-07-12 09:36:55.161674] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.858 [2024-07-12 09:36:55.161686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.858 [2024-07-12 09:36:55.161708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.858 [2024-07-12 09:36:55.161725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:08.858 [2024-07-12 09:36:55.161735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.858 [2024-07-12 09:36:55.161749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.116 [2024-07-12 09:36:55.248902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.116 [2024-07-12 09:36:55.248975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:09.116 [2024-07-12 09:36:55.248993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.116 [2024-07-12 09:36:55.249006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.116 [2024-07-12 09:36:55.324329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.116 [2024-07-12 09:36:55.324404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:09.116 [2024-07-12 09:36:55.324422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.116 [2024-07-12 09:36:55.324438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.116 [2024-07-12 09:36:55.324538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.116 [2024-07-12 09:36:55.324560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:09.116 [2024-07-12 09:36:55.324572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.116 [2024-07-12 09:36:55.324584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.116 [2024-07-12 09:36:55.324639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.116 [2024-07-12 09:36:55.324661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:09.116 [2024-07-12 09:36:55.324672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.116 [2024-07-12 09:36:55.324684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.116 [2024-07-12 09:36:55.324795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.116 [2024-07-12 09:36:55.324817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:09.116 [2024-07-12 09:36:55.324829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.116 [2024-07-12 09:36:55.324841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.116 [2024-07-12 09:36:55.324891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.116 [2024-07-12 09:36:55.324910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:09.116 [2024-07-12 09:36:55.324922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.116 [2024-07-12 09:36:55.324934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.116 [2024-07-12 09:36:55.324980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.116 [2024-07-12 09:36:55.324997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:31:09.116 [2024-07-12 09:36:55.325007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.116 [2024-07-12 09:36:55.325019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.116 [2024-07-12 09:36:55.325070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.116 [2024-07-12 09:36:55.325091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:09.116 [2024-07-12 09:36:55.325103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.116 [2024-07-12 09:36:55.325114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.116 [2024-07-12 09:36:55.325297] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 404.878 ms, result 0 00:31:09.116 true 00:31:09.116 09:36:55 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 87045 00:31:09.116 09:36:55 ftl.ftl_restore_fast -- common/autotest_common.sh@948 -- # '[' -z 87045 ']' 00:31:09.116 09:36:55 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # kill -0 87045 00:31:09.116 09:36:55 ftl.ftl_restore_fast -- common/autotest_common.sh@953 -- # uname 00:31:09.116 09:36:55 ftl.ftl_restore_fast -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:09.116 09:36:55 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87045 00:31:09.116 killing process with pid 87045 00:31:09.116 09:36:55 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:09.116 09:36:55 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:09.116 09:36:55 ftl.ftl_restore_fast -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87045' 00:31:09.116 09:36:55 ftl.ftl_restore_fast -- common/autotest_common.sh@967 -- # kill 87045 00:31:09.116 09:36:55 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # wait 87045 00:31:14.417 09:36:59 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:31:18.615 262144+0 records in 00:31:18.615 262144+0 records out 00:31:18.615 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.35954 s, 246 MB/s 00:31:18.615 09:37:04 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:20.519 09:37:06 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:20.519 [2024-07-12 09:37:06.471137] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:31:20.519 [2024-07-12 09:37:06.471311] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87263 ] 00:31:20.519 [2024-07-12 09:37:06.625311] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.519 [2024-07-12 09:37:06.798861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.777 [2024-07-12 09:37:07.095705] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:20.777 [2024-07-12 09:37:07.095787] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:21.037 [2024-07-12 09:37:07.254420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.037 [2024-07-12 09:37:07.254476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:21.037 [2024-07-12 09:37:07.254512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:21.037 [2024-07-12 09:37:07.254523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.037 [2024-07-12 09:37:07.254596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.037 [2024-07-12 09:37:07.254633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:21.037 [2024-07-12 09:37:07.254646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:31:21.037 [2024-07-12 09:37:07.254661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.037 [2024-07-12 09:37:07.254693] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:21.037 [2024-07-12 09:37:07.255576] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:21.037 [2024-07-12 09:37:07.255654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.037 [2024-07-12 09:37:07.255679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:21.037 [2024-07-12 09:37:07.255693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.967 ms 00:31:21.037 [2024-07-12 09:37:07.255705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.037 [2024-07-12 09:37:07.256935] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:21.037 [2024-07-12 09:37:07.273579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.037 [2024-07-12 09:37:07.273648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:21.037 [2024-07-12 09:37:07.273683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.660 ms 00:31:21.037 [2024-07-12 09:37:07.273694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.037 [2024-07-12 09:37:07.273763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.037 [2024-07-12 09:37:07.273783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:21.037 [2024-07-12 09:37:07.273800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:31:21.037 [2024-07-12 09:37:07.273811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.037 [2024-07-12 09:37:07.278716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:21.037 [2024-07-12 09:37:07.278757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:21.037 [2024-07-12 09:37:07.278789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.822 ms 00:31:21.037 [2024-07-12 09:37:07.278800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.037 [2024-07-12 09:37:07.278886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.037 [2024-07-12 09:37:07.278908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:21.037 [2024-07-12 09:37:07.278919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:31:21.037 [2024-07-12 09:37:07.278930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.037 [2024-07-12 09:37:07.278998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.037 [2024-07-12 09:37:07.279016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:21.037 [2024-07-12 09:37:07.279028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:21.037 [2024-07-12 09:37:07.279038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.037 [2024-07-12 09:37:07.279070] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:21.037 [2024-07-12 09:37:07.283346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.037 [2024-07-12 09:37:07.283413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:21.037 [2024-07-12 09:37:07.283446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.286 ms 00:31:21.037 [2024-07-12 09:37:07.283457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.037 [2024-07-12 09:37:07.283504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.037 [2024-07-12 09:37:07.283522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:21.037 [2024-07-12 09:37:07.283534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:21.037 [2024-07-12 09:37:07.283546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.037 [2024-07-12 09:37:07.283601] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:21.037 [2024-07-12 09:37:07.283677] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:21.037 [2024-07-12 09:37:07.283722] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:21.037 [2024-07-12 09:37:07.283747] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:31:21.037 [2024-07-12 09:37:07.283854] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:21.037 [2024-07-12 09:37:07.283870] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:21.037 [2024-07-12 09:37:07.283885] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:31:21.037 [2024-07-12 09:37:07.283900] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:21.037 [2024-07-12 09:37:07.283915] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:21.037 [2024-07-12 09:37:07.283928] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:21.037 [2024-07-12 09:37:07.283940] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:21.037 [2024-07-12 09:37:07.283951] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:21.037 [2024-07-12 09:37:07.283962] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:21.037 [2024-07-12 09:37:07.283975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.037 [2024-07-12 09:37:07.283992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:21.037 [2024-07-12 09:37:07.284005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:31:21.037 [2024-07-12 09:37:07.284017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.037 [2024-07-12 09:37:07.284121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.037 [2024-07-12 09:37:07.284137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:21.037 [2024-07-12 09:37:07.284150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:31:21.037 [2024-07-12 09:37:07.284161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.037 [2024-07-12 09:37:07.284377] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:21.037 [2024-07-12 09:37:07.284399] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:21.037 [2024-07-12 09:37:07.284418] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:21.037 [2024-07-12 09:37:07.284431] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:21.037 [2024-07-12 09:37:07.284444] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:21.037 [2024-07-12 09:37:07.284454] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:21.037 [2024-07-12 09:37:07.284465] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:21.037 [2024-07-12 09:37:07.284476] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:21.037 [2024-07-12 09:37:07.284487] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:21.037 [2024-07-12 09:37:07.284498] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:21.037 [2024-07-12 09:37:07.284509] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:21.037 [2024-07-12 09:37:07.284520] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:21.037 [2024-07-12 09:37:07.284530] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:21.037 [2024-07-12 09:37:07.284541] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:21.037 [2024-07-12 09:37:07.284552] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:21.037 [2024-07-12 09:37:07.284564] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:21.037 [2024-07-12 09:37:07.284575] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:21.037 [2024-07-12 09:37:07.284586] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:21.037 [2024-07-12 09:37:07.284613] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:21.037 [2024-07-12 09:37:07.284625] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:21.037 [2024-07-12 09:37:07.284649] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:21.037 [2024-07-12 09:37:07.284661] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:21.037 [2024-07-12 09:37:07.284672] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:21.037 [2024-07-12 09:37:07.284683] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:21.037 [2024-07-12 09:37:07.284694] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:21.037 [2024-07-12 09:37:07.284704] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:21.037 [2024-07-12 09:37:07.284716] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:21.037 [2024-07-12 09:37:07.284726] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:21.037 [2024-07-12 09:37:07.284737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:21.037 [2024-07-12 09:37:07.284749] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:21.037 [2024-07-12 09:37:07.284760] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:21.037 [2024-07-12 09:37:07.284774] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:21.037 [2024-07-12 09:37:07.284785] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:21.037 [2024-07-12 09:37:07.284796] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:21.037 [2024-07-12 09:37:07.284807] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:21.037 [2024-07-12 09:37:07.284819] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:21.037 [2024-07-12 09:37:07.284829] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:21.037 [2024-07-12 09:37:07.284840] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:21.038 [2024-07-12 09:37:07.284851] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:21.038 [2024-07-12 09:37:07.284862] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:21.038 [2024-07-12 09:37:07.284873] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:21.038 [2024-07-12 09:37:07.284884] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:21.038 [2024-07-12 09:37:07.284895] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:21.038 [2024-07-12 09:37:07.284906] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:21.038 [2024-07-12 09:37:07.284917] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:21.038 [2024-07-12 09:37:07.284929] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:21.038 [2024-07-12 09:37:07.284941] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:21.038 [2024-07-12 09:37:07.284954] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:21.038 [2024-07-12 09:37:07.284966] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:21.038 [2024-07-12 09:37:07.284977] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:21.038 
[2024-07-12 09:37:07.284990] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:21.038 [2024-07-12 09:37:07.285001] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:21.038 [2024-07-12 09:37:07.285012] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:21.038 [2024-07-12 09:37:07.285024] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:21.038 [2024-07-12 09:37:07.285039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:21.038 [2024-07-12 09:37:07.285053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:21.038 [2024-07-12 09:37:07.285066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:21.038 [2024-07-12 09:37:07.285078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:21.038 [2024-07-12 09:37:07.285090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:21.038 [2024-07-12 09:37:07.285102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:21.038 [2024-07-12 09:37:07.285114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:21.038 [2024-07-12 09:37:07.285126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:21.038 [2024-07-12 09:37:07.285137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:21.038 [2024-07-12 09:37:07.285149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:21.038 [2024-07-12 09:37:07.285161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:21.038 [2024-07-12 09:37:07.285173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:21.038 [2024-07-12 09:37:07.285185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:21.038 [2024-07-12 09:37:07.285196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:21.038 [2024-07-12 09:37:07.285209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:21.038 [2024-07-12 09:37:07.285220] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:21.038 [2024-07-12 09:37:07.285233] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:21.038 [2024-07-12 09:37:07.285247] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:21.038 [2024-07-12 09:37:07.285272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:21.038 [2024-07-12 09:37:07.285286] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:21.038 [2024-07-12 09:37:07.285298] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:21.038 [2024-07-12 09:37:07.285311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.038 [2024-07-12 09:37:07.285330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:21.038 [2024-07-12 09:37:07.285343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.046 ms 00:31:21.038 [2024-07-12 09:37:07.285355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.038 [2024-07-12 09:37:07.325103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.038 [2024-07-12 09:37:07.325162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:21.038 [2024-07-12 09:37:07.325226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.685 ms 00:31:21.038 [2024-07-12 09:37:07.325258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.038 [2024-07-12 09:37:07.325409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.038 [2024-07-12 09:37:07.325428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:21.038 [2024-07-12 09:37:07.325442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:31:21.038 [2024-07-12 09:37:07.325453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.038 [2024-07-12 09:37:07.360391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.038 [2024-07-12 09:37:07.360439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:21.038 [2024-07-12 09:37:07.360456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.846 ms 00:31:21.038 [2024-07-12 09:37:07.360468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.038 [2024-07-12 09:37:07.360521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.038 [2024-07-12 09:37:07.360537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:21.038 [2024-07-12 09:37:07.360550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:21.038 [2024-07-12 09:37:07.360560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.038 [2024-07-12 09:37:07.360954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.038 [2024-07-12 09:37:07.360972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:21.038 [2024-07-12 09:37:07.360983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:31:21.038 [2024-07-12 09:37:07.360992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.038 [2024-07-12 09:37:07.361126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.038 [2024-07-12 09:37:07.361144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:21.038 [2024-07-12 09:37:07.361155] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:31:21.038 [2024-07-12 09:37:07.361165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.038 [2024-07-12 09:37:07.377563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.038 [2024-07-12 09:37:07.377622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:21.038 [2024-07-12 09:37:07.377641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.375 ms 00:31:21.038 [2024-07-12 09:37:07.377653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.298 [2024-07-12 09:37:07.395086] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:21.298 [2024-07-12 09:37:07.395134] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:21.298 [2024-07-12 09:37:07.395175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.298 [2024-07-12 09:37:07.395188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:21.298 [2024-07-12 09:37:07.395217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.379 ms 00:31:21.298 [2024-07-12 09:37:07.395232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.298 [2024-07-12 09:37:07.428673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.298 [2024-07-12 09:37:07.428712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:21.298 [2024-07-12 09:37:07.428744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.392 ms 00:31:21.298 [2024-07-12 09:37:07.428755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.298 [2024-07-12 09:37:07.443894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.298 [2024-07-12 09:37:07.443951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:21.298 [2024-07-12 09:37:07.443996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.087 ms 00:31:21.298 [2024-07-12 09:37:07.444038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.298 [2024-07-12 09:37:07.459566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.298 [2024-07-12 09:37:07.459641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:21.298 [2024-07-12 09:37:07.459689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.456 ms 00:31:21.298 [2024-07-12 09:37:07.459701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.298 [2024-07-12 09:37:07.460551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.298 [2024-07-12 09:37:07.460591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:21.298 [2024-07-12 09:37:07.460638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:31:21.298 [2024-07-12 09:37:07.460651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.298 [2024-07-12 09:37:07.529185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.298 [2024-07-12 09:37:07.529276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:21.298 [2024-07-12 09:37:07.529312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 68.509 ms 00:31:21.298 [2024-07-12 09:37:07.529324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.298 [2024-07-12 09:37:07.540917] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:21.298 [2024-07-12 09:37:07.543269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.298 [2024-07-12 09:37:07.543330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:21.298 [2024-07-12 09:37:07.543363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.853 ms 00:31:21.298 [2024-07-12 09:37:07.543373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.298 [2024-07-12 09:37:07.543464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.298 [2024-07-12 09:37:07.543483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:21.298 [2024-07-12 09:37:07.543496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:21.298 [2024-07-12 09:37:07.543506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.298 [2024-07-12 09:37:07.543588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.298 [2024-07-12 09:37:07.543606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:21.298 [2024-07-12 09:37:07.543651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:31:21.298 [2024-07-12 09:37:07.543680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.298 [2024-07-12 09:37:07.543727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.298 [2024-07-12 09:37:07.543742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:21.298 [2024-07-12 09:37:07.543754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:21.298 [2024-07-12 09:37:07.543764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.298 [2024-07-12 09:37:07.543804] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:21.298 [2024-07-12 09:37:07.543822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.298 [2024-07-12 09:37:07.543833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:21.298 [2024-07-12 09:37:07.543845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:31:21.298 [2024-07-12 09:37:07.543860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.298 [2024-07-12 09:37:07.575840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.298 [2024-07-12 09:37:07.575888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:21.298 [2024-07-12 09:37:07.575907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.956 ms 00:31:21.298 [2024-07-12 09:37:07.575920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.298 [2024-07-12 09:37:07.576017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.298 [2024-07-12 09:37:07.576065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:21.298 [2024-07-12 09:37:07.576087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:31:21.298 [2024-07-12 09:37:07.576098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:31:21.298 [2024-07-12 09:37:07.577485] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 322.537 ms, result 0 00:32:02.585  Copying: 24/1024 [MB] (24 MBps) Copying: 49/1024 [MB] (24 MBps) Copying: 73/1024 [MB] (24 MBps) Copying: 96/1024 [MB] (23 MBps) Copying: 120/1024 [MB] (24 MBps) Copying: 144/1024 [MB] (23 MBps) Copying: 167/1024 [MB] (23 MBps) Copying: 192/1024 [MB] (24 MBps) Copying: 215/1024 [MB] (23 MBps) Copying: 239/1024 [MB] (23 MBps) Copying: 262/1024 [MB] (22 MBps) Copying: 287/1024 [MB] (24 MBps) Copying: 313/1024 [MB] (25 MBps) Copying: 338/1024 [MB] (24 MBps) Copying: 363/1024 [MB] (25 MBps) Copying: 389/1024 [MB] (26 MBps) Copying: 416/1024 [MB] (26 MBps) Copying: 442/1024 [MB] (26 MBps) Copying: 468/1024 [MB] (25 MBps) Copying: 493/1024 [MB] (24 MBps) Copying: 519/1024 [MB] (25 MBps) Copying: 545/1024 [MB] (26 MBps) Copying: 570/1024 [MB] (25 MBps) Copying: 594/1024 [MB] (24 MBps) Copying: 620/1024 [MB] (25 MBps) Copying: 646/1024 [MB] (25 MBps) Copying: 672/1024 [MB] (26 MBps) Copying: 698/1024 [MB] (25 MBps) Copying: 723/1024 [MB] (25 MBps) Copying: 748/1024 [MB] (24 MBps) Copying: 773/1024 [MB] (24 MBps) Copying: 798/1024 [MB] (25 MBps) Copying: 824/1024 [MB] (25 MBps) Copying: 848/1024 [MB] (24 MBps) Copying: 872/1024 [MB] (24 MBps) Copying: 898/1024 [MB] (25 MBps) Copying: 923/1024 [MB] (25 MBps) Copying: 947/1024 [MB] (24 MBps) Copying: 971/1024 [MB] (24 MBps) Copying: 997/1024 [MB] (25 MBps) Copying: 1022/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-12 09:37:48.644105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.585 [2024-07-12 09:37:48.644219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:02.585 [2024-07-12 09:37:48.644259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:02.585 [2024-07-12 09:37:48.644271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.585 [2024-07-12 09:37:48.644301] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:02.585 [2024-07-12 09:37:48.647446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.585 [2024-07-12 09:37:48.647480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:02.585 [2024-07-12 09:37:48.647496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.107 ms 00:32:02.585 [2024-07-12 09:37:48.647507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.585 [2024-07-12 09:37:48.649069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.585 [2024-07-12 09:37:48.649125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:02.585 [2024-07-12 09:37:48.649165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.521 ms 00:32:02.585 [2024-07-12 09:37:48.649176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.585 [2024-07-12 09:37:48.649223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.585 [2024-07-12 09:37:48.649241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:02.585 [2024-07-12 09:37:48.649253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:02.585 [2024-07-12 09:37:48.649264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.585 
[2024-07-12 09:37:48.649316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.585 [2024-07-12 09:37:48.649333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:02.585 [2024-07-12 09:37:48.649344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:32:02.585 [2024-07-12 09:37:48.649359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.585 [2024-07-12 09:37:48.649379] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:02.585 [2024-07-12 09:37:48.649396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 
00:32:02.586 [2024-07-12 09:37:48.649638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 
wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.649991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.650002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.650013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.650024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.650036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.650047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.650058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.650069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.650085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.650097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.650109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.650120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.650131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.650143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.650154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.650165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.650176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.650226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.650240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.650251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.650263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.650275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.650301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:02.586 [2024-07-12 09:37:48.650313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:02.587 [2024-07-12 09:37:48.650341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:02.587 [2024-07-12 09:37:48.650353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:02.587 [2024-07-12 09:37:48.650365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:02.587 [2024-07-12 09:37:48.650378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:02.587 [2024-07-12 09:37:48.650390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:02.587 [2024-07-12 09:37:48.650402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:02.587 [2024-07-12 09:37:48.650414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:02.587 [2024-07-12 09:37:48.650429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:02.587 [2024-07-12 09:37:48.650441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:02.587 [2024-07-12 09:37:48.650453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:02.587 [2024-07-12 09:37:48.650465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:02.587 [2024-07-12 09:37:48.650477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:02.587 [2024-07-12 09:37:48.650488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:02.587 [2024-07-12 09:37:48.650500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:02.587 [2024-07-12 09:37:48.650512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:02.587 [2024-07-12 09:37:48.650525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:02.587 [2024-07-12 09:37:48.650539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:02.587 [2024-07-12 09:37:48.650551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:02.587 [2024-07-12 09:37:48.650563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:02.587 [2024-07-12 09:37:48.650575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:02.587 [2024-07-12 09:37:48.650587] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:02.587 [2024-07-12 09:37:48.650599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:02.587 [2024-07-12 09:37:48.650611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:02.587 [2024-07-12 09:37:48.650638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:02.587 [2024-07-12 09:37:48.650665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:02.587 [2024-07-12 09:37:48.650686] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:02.587 [2024-07-12 09:37:48.650698] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b198a0e0-0f51-42a0-ac11-889a0fc09615 00:32:02.587 [2024-07-12 09:37:48.650710] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:02.587 [2024-07-12 09:37:48.650721] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:02.587 [2024-07-12 09:37:48.650732] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:02.587 [2024-07-12 09:37:48.650743] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:02.587 [2024-07-12 09:37:48.650754] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:02.587 [2024-07-12 09:37:48.650765] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:02.587 [2024-07-12 09:37:48.650782] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:02.587 [2024-07-12 09:37:48.650793] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:02.587 [2024-07-12 09:37:48.650803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:02.587 [2024-07-12 09:37:48.650814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.587 [2024-07-12 09:37:48.650825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:02.587 [2024-07-12 09:37:48.650837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.436 ms 00:32:02.587 [2024-07-12 09:37:48.650848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.587 [2024-07-12 09:37:48.666866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.587 [2024-07-12 09:37:48.666908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:02.587 [2024-07-12 09:37:48.666942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.995 ms 00:32:02.587 [2024-07-12 09:37:48.666968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.587 [2024-07-12 09:37:48.667435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.587 [2024-07-12 09:37:48.667482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:02.587 [2024-07-12 09:37:48.667497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:32:02.587 [2024-07-12 09:37:48.667508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.587 [2024-07-12 09:37:48.701705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.587 [2024-07-12 09:37:48.701753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:02.587 [2024-07-12 09:37:48.701787] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.587 [2024-07-12 09:37:48.701804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.587 [2024-07-12 09:37:48.701874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.587 [2024-07-12 09:37:48.701889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:02.587 [2024-07-12 09:37:48.701900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.587 [2024-07-12 09:37:48.701911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.587 [2024-07-12 09:37:48.702002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.587 [2024-07-12 09:37:48.702022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:02.587 [2024-07-12 09:37:48.702034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.587 [2024-07-12 09:37:48.702044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.587 [2024-07-12 09:37:48.702072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.587 [2024-07-12 09:37:48.702086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:02.587 [2024-07-12 09:37:48.702096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.587 [2024-07-12 09:37:48.702107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.587 [2024-07-12 09:37:48.794320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.587 [2024-07-12 09:37:48.794378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:02.587 [2024-07-12 09:37:48.794413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.587 [2024-07-12 09:37:48.794424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.587 [2024-07-12 09:37:48.875320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.587 [2024-07-12 09:37:48.875380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:02.587 [2024-07-12 09:37:48.875400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.587 [2024-07-12 09:37:48.875412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.587 [2024-07-12 09:37:48.875506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.587 [2024-07-12 09:37:48.875525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:02.587 [2024-07-12 09:37:48.875538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.587 [2024-07-12 09:37:48.875550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.587 [2024-07-12 09:37:48.875599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.587 [2024-07-12 09:37:48.875623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:02.587 [2024-07-12 09:37:48.875647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.587 [2024-07-12 09:37:48.875660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.587 [2024-07-12 09:37:48.875758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.587 [2024-07-12 09:37:48.875778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory 
pools 00:32:02.587 [2024-07-12 09:37:48.875791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.587 [2024-07-12 09:37:48.875804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.587 [2024-07-12 09:37:48.875846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.587 [2024-07-12 09:37:48.875865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:02.587 [2024-07-12 09:37:48.875884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.587 [2024-07-12 09:37:48.875898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.587 [2024-07-12 09:37:48.875945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.588 [2024-07-12 09:37:48.875962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:02.588 [2024-07-12 09:37:48.875974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.588 [2024-07-12 09:37:48.875986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.588 [2024-07-12 09:37:48.876050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.588 [2024-07-12 09:37:48.876084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:02.588 [2024-07-12 09:37:48.876097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.588 [2024-07-12 09:37:48.876109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.588 [2024-07-12 09:37:48.876279] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 232.118 ms, result 0 00:32:03.961 00:32:03.961 00:32:03.961 09:37:50 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:32:03.961 [2024-07-12 09:37:50.176127] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:32:03.962 [2024-07-12 09:37:50.176361] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87688 ] 00:32:04.221 [2024-07-12 09:37:50.344532] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.221 [2024-07-12 09:37:50.522118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:04.480 [2024-07-12 09:37:50.817252] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:04.480 [2024-07-12 09:37:50.817352] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:04.740 [2024-07-12 09:37:50.976758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.740 [2024-07-12 09:37:50.976822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:04.740 [2024-07-12 09:37:50.976858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:04.740 [2024-07-12 09:37:50.976869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.740 [2024-07-12 09:37:50.976940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.740 [2024-07-12 09:37:50.976959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:04.740 [2024-07-12 09:37:50.976971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:32:04.740 [2024-07-12 09:37:50.976985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.740 [2024-07-12 09:37:50.977013] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:04.740 [2024-07-12 09:37:50.978047] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:04.740 [2024-07-12 09:37:50.978108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.740 [2024-07-12 09:37:50.978127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:04.740 [2024-07-12 09:37:50.978140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.101 ms 00:32:04.740 [2024-07-12 09:37:50.978151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.740 [2024-07-12 09:37:50.978603] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:04.740 [2024-07-12 09:37:50.978635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.740 [2024-07-12 09:37:50.978648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:04.740 [2024-07-12 09:37:50.978661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:32:04.740 [2024-07-12 09:37:50.978677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.740 [2024-07-12 09:37:50.978733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.740 [2024-07-12 09:37:50.978764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:04.740 [2024-07-12 09:37:50.978792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:32:04.740 [2024-07-12 09:37:50.978802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.740 [2024-07-12 09:37:50.979209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:04.740 [2024-07-12 09:37:50.979265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:04.740 [2024-07-12 09:37:50.979278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:32:04.740 [2024-07-12 09:37:50.979294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.740 [2024-07-12 09:37:50.979372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.740 [2024-07-12 09:37:50.979390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:04.740 [2024-07-12 09:37:50.979401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:32:04.740 [2024-07-12 09:37:50.979411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.740 [2024-07-12 09:37:50.979445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.740 [2024-07-12 09:37:50.979460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:04.740 [2024-07-12 09:37:50.979471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:04.740 [2024-07-12 09:37:50.979482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.740 [2024-07-12 09:37:50.979513] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:04.740 [2024-07-12 09:37:50.984097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.740 [2024-07-12 09:37:50.984372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:04.740 [2024-07-12 09:37:50.984516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.589 ms 00:32:04.740 [2024-07-12 09:37:50.984707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.740 [2024-07-12 09:37:50.984805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.740 [2024-07-12 09:37:50.984952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:04.740 [2024-07-12 09:37:50.985068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:32:04.740 [2024-07-12 09:37:50.985117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.740 [2024-07-12 09:37:50.985250] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:04.740 [2024-07-12 09:37:50.985324] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:04.740 [2024-07-12 09:37:50.985528] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:04.740 [2024-07-12 09:37:50.985675] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:32:04.740 [2024-07-12 09:37:50.985783] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:04.740 [2024-07-12 09:37:50.985800] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:04.740 [2024-07-12 09:37:50.985815] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:32:04.740 [2024-07-12 09:37:50.985829] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:04.740 [2024-07-12 09:37:50.985842] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:04.740 [2024-07-12 09:37:50.985868] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:04.740 [2024-07-12 09:37:50.985880] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:04.740 [2024-07-12 09:37:50.985890] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:04.740 [2024-07-12 09:37:50.985907] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:04.740 [2024-07-12 09:37:50.985920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.740 [2024-07-12 09:37:50.985931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:04.740 [2024-07-12 09:37:50.985943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms 00:32:04.741 [2024-07-12 09:37:50.985954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.741 [2024-07-12 09:37:50.986053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.741 [2024-07-12 09:37:50.986069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:04.741 [2024-07-12 09:37:50.986080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:32:04.741 [2024-07-12 09:37:50.986091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.741 [2024-07-12 09:37:50.986213] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:04.741 [2024-07-12 09:37:50.986250] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:04.741 [2024-07-12 09:37:50.986263] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:04.741 [2024-07-12 09:37:50.986274] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:04.741 [2024-07-12 09:37:50.986285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:04.741 [2024-07-12 09:37:50.986295] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:04.741 [2024-07-12 09:37:50.986305] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:04.741 [2024-07-12 09:37:50.986316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:04.741 [2024-07-12 09:37:50.986326] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:04.741 [2024-07-12 09:37:50.986336] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:04.741 [2024-07-12 09:37:50.986347] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:04.741 [2024-07-12 09:37:50.986357] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:04.741 [2024-07-12 09:37:50.986367] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:04.741 [2024-07-12 09:37:50.986377] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:04.741 [2024-07-12 09:37:50.986387] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:04.741 [2024-07-12 09:37:50.986397] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:04.741 [2024-07-12 09:37:50.986408] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:04.741 [2024-07-12 09:37:50.986419] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:04.741 [2024-07-12 09:37:50.986430] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:04.741 [2024-07-12 09:37:50.986441] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:04.741 [2024-07-12 09:37:50.986451] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:04.741 [2024-07-12 09:37:50.986461] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:04.741 [2024-07-12 09:37:50.986485] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:04.741 [2024-07-12 09:37:50.986496] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:04.741 [2024-07-12 09:37:50.986506] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:04.741 [2024-07-12 09:37:50.986522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:04.741 [2024-07-12 09:37:50.986533] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:04.741 [2024-07-12 09:37:50.986543] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:04.741 [2024-07-12 09:37:50.986554] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:04.741 [2024-07-12 09:37:50.986579] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:04.741 [2024-07-12 09:37:50.986589] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:04.741 [2024-07-12 09:37:50.986598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:04.741 [2024-07-12 09:37:50.986608] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:04.741 [2024-07-12 09:37:50.986618] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:04.741 [2024-07-12 09:37:50.986628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:04.741 [2024-07-12 09:37:50.986637] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:04.741 [2024-07-12 09:37:50.986647] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:04.741 [2024-07-12 09:37:50.986656] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:04.741 [2024-07-12 09:37:50.986666] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:04.741 [2024-07-12 09:37:50.986676] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:04.741 [2024-07-12 09:37:50.986686] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:04.741 [2024-07-12 09:37:50.986695] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:04.741 [2024-07-12 09:37:50.986705] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:04.741 [2024-07-12 09:37:50.986714] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:04.741 [2024-07-12 09:37:50.986725] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:04.741 [2024-07-12 09:37:50.986735] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:04.741 [2024-07-12 09:37:50.986760] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:04.741 [2024-07-12 09:37:50.986770] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:04.741 [2024-07-12 09:37:50.986780] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:04.741 [2024-07-12 09:37:50.986789] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:04.741 
[2024-07-12 09:37:50.986799] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:04.741 [2024-07-12 09:37:50.986808] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:04.741 [2024-07-12 09:37:50.986818] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:04.741 [2024-07-12 09:37:50.986830] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:04.741 [2024-07-12 09:37:50.986843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:04.741 [2024-07-12 09:37:50.986855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:04.741 [2024-07-12 09:37:50.986866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:04.741 [2024-07-12 09:37:50.986880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:04.741 [2024-07-12 09:37:50.986891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:04.741 [2024-07-12 09:37:50.986902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:04.741 [2024-07-12 09:37:50.986912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:04.741 [2024-07-12 09:37:50.986923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:04.741 [2024-07-12 09:37:50.986933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:04.741 [2024-07-12 09:37:50.986943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:04.741 [2024-07-12 09:37:50.986954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:04.741 [2024-07-12 09:37:50.986964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:04.741 [2024-07-12 09:37:50.986975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:04.741 [2024-07-12 09:37:50.986985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:04.741 [2024-07-12 09:37:50.986996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:04.741 [2024-07-12 09:37:50.987007] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:04.741 [2024-07-12 09:37:50.987023] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:04.741 [2024-07-12 09:37:50.987035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:04.741 [2024-07-12 09:37:50.987046] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:04.741 [2024-07-12 09:37:50.987057] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:04.741 [2024-07-12 09:37:50.987068] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:04.741 [2024-07-12 09:37:50.987080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.741 [2024-07-12 09:37:50.987090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:04.741 [2024-07-12 09:37:50.987101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.945 ms 00:32:04.741 [2024-07-12 09:37:50.987112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.741 [2024-07-12 09:37:51.028639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.741 [2024-07-12 09:37:51.028703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:04.741 [2024-07-12 09:37:51.028742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.466 ms 00:32:04.741 [2024-07-12 09:37:51.028754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.741 [2024-07-12 09:37:51.028877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.741 [2024-07-12 09:37:51.028894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:04.741 [2024-07-12 09:37:51.028906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:32:04.741 [2024-07-12 09:37:51.028917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.741 [2024-07-12 09:37:51.068090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.741 [2024-07-12 09:37:51.068155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:04.741 [2024-07-12 09:37:51.068193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.047 ms 00:32:04.741 [2024-07-12 09:37:51.068238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.741 [2024-07-12 09:37:51.068312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.741 [2024-07-12 09:37:51.068345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:04.741 [2024-07-12 09:37:51.068364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:04.741 [2024-07-12 09:37:51.068375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.741 [2024-07-12 09:37:51.068530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.741 [2024-07-12 09:37:51.068549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:04.741 [2024-07-12 09:37:51.068562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:32:04.741 [2024-07-12 09:37:51.068572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.741 [2024-07-12 09:37:51.068742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.741 [2024-07-12 09:37:51.068769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:04.741 [2024-07-12 09:37:51.068782] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:32:04.741 [2024-07-12 09:37:51.068798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.742 [2024-07-12 09:37:51.085229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.742 [2024-07-12 09:37:51.085279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:04.742 [2024-07-12 09:37:51.085318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.403 ms 00:32:04.742 [2024-07-12 09:37:51.085330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.742 [2024-07-12 09:37:51.085511] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:04.742 [2024-07-12 09:37:51.085535] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:04.742 [2024-07-12 09:37:51.085549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.742 [2024-07-12 09:37:51.085560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:04.742 [2024-07-12 09:37:51.085572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:32:04.742 [2024-07-12 09:37:51.085582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.001 [2024-07-12 09:37:51.099837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.001 [2024-07-12 09:37:51.099878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:05.001 [2024-07-12 09:37:51.099895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.229 ms 00:32:05.001 [2024-07-12 09:37:51.099907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.001 [2024-07-12 09:37:51.100076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.001 [2024-07-12 09:37:51.100093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:05.001 [2024-07-12 09:37:51.100105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:32:05.001 [2024-07-12 09:37:51.100115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.002 [2024-07-12 09:37:51.100177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.002 [2024-07-12 09:37:51.100194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:05.002 [2024-07-12 09:37:51.100212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:05.002 [2024-07-12 09:37:51.100222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.002 [2024-07-12 09:37:51.101016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.002 [2024-07-12 09:37:51.101051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:05.002 [2024-07-12 09:37:51.101066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:32:05.002 [2024-07-12 09:37:51.101078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.002 [2024-07-12 09:37:51.101101] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:05.002 [2024-07-12 09:37:51.101118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.002 [2024-07-12 09:37:51.101142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:32:05.002 [2024-07-12 09:37:51.101158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:32:05.002 [2024-07-12 09:37:51.101169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.002 [2024-07-12 09:37:51.114860] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:05.002 [2024-07-12 09:37:51.115163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.002 [2024-07-12 09:37:51.115186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:05.002 [2024-07-12 09:37:51.115200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.945 ms 00:32:05.002 [2024-07-12 09:37:51.115212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.002 [2024-07-12 09:37:51.117544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.002 [2024-07-12 09:37:51.117608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:05.002 [2024-07-12 09:37:51.117638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.259 ms 00:32:05.002 [2024-07-12 09:37:51.117654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.002 [2024-07-12 09:37:51.117771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.002 [2024-07-12 09:37:51.117805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:05.002 [2024-07-12 09:37:51.117820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:32:05.002 [2024-07-12 09:37:51.117832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.002 [2024-07-12 09:37:51.117866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.002 [2024-07-12 09:37:51.117881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:05.002 [2024-07-12 09:37:51.117894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:05.002 [2024-07-12 09:37:51.117904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.002 [2024-07-12 09:37:51.117948] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:05.002 [2024-07-12 09:37:51.117967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.002 [2024-07-12 09:37:51.117979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:05.002 [2024-07-12 09:37:51.117991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:32:05.002 [2024-07-12 09:37:51.118002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.002 [2024-07-12 09:37:51.150032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.002 [2024-07-12 09:37:51.150091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:05.002 [2024-07-12 09:37:51.150124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.005 ms 00:32:05.002 [2024-07-12 09:37:51.150143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.002 [2024-07-12 09:37:51.150235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.002 [2024-07-12 09:37:51.150256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:05.002 [2024-07-12 09:37:51.150268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.036 ms 00:32:05.002 [2024-07-12 09:37:51.150279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.002 [2024-07-12 09:37:51.151466] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 174.192 ms, result 0 00:32:45.882  Copying: 24/1024 [MB] (24 MBps) Copying: 49/1024 [MB] (24 MBps) Copying: 73/1024 [MB] (23 MBps) Copying: 96/1024 [MB] (23 MBps) Copying: 121/1024 [MB] (24 MBps) Copying: 146/1024 [MB] (25 MBps) Copying: 170/1024 [MB] (24 MBps) Copying: 194/1024 [MB] (23 MBps) Copying: 218/1024 [MB] (23 MBps) Copying: 242/1024 [MB] (23 MBps) Copying: 265/1024 [MB] (23 MBps) Copying: 289/1024 [MB] (24 MBps) Copying: 315/1024 [MB] (25 MBps) Copying: 342/1024 [MB] (26 MBps) Copying: 369/1024 [MB] (27 MBps) Copying: 397/1024 [MB] (27 MBps) Copying: 421/1024 [MB] (24 MBps) Copying: 447/1024 [MB] (25 MBps) Copying: 473/1024 [MB] (26 MBps) Copying: 497/1024 [MB] (24 MBps) Copying: 523/1024 [MB] (25 MBps) Copying: 548/1024 [MB] (25 MBps) Copying: 575/1024 [MB] (27 MBps) Copying: 601/1024 [MB] (25 MBps) Copying: 627/1024 [MB] (26 MBps) Copying: 654/1024 [MB] (26 MBps) Copying: 679/1024 [MB] (25 MBps) Copying: 705/1024 [MB] (26 MBps) Copying: 732/1024 [MB] (26 MBps) Copying: 759/1024 [MB] (26 MBps) Copying: 785/1024 [MB] (25 MBps) Copying: 810/1024 [MB] (25 MBps) Copying: 834/1024 [MB] (23 MBps) Copying: 860/1024 [MB] (26 MBps) Copying: 885/1024 [MB] (24 MBps) Copying: 912/1024 [MB] (26 MBps) Copying: 938/1024 [MB] (25 MBps) Copying: 963/1024 [MB] (25 MBps) Copying: 989/1024 [MB] (26 MBps) Copying: 1013/1024 [MB] (23 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-12 09:38:32.001346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.883 [2024-07-12 09:38:32.001435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:45.883 [2024-07-12 09:38:32.001471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:45.883 [2024-07-12 09:38:32.001486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.883 [2024-07-12 09:38:32.001523] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:45.883 [2024-07-12 09:38:32.007446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.883 [2024-07-12 09:38:32.007526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:45.883 [2024-07-12 09:38:32.007558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.891 ms 00:32:45.883 [2024-07-12 09:38:32.007573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.883 [2024-07-12 09:38:32.007925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.883 [2024-07-12 09:38:32.007955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:45.883 [2024-07-12 09:38:32.007982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:32:45.883 [2024-07-12 09:38:32.007996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.883 [2024-07-12 09:38:32.008039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.883 [2024-07-12 09:38:32.008057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:45.883 [2024-07-12 09:38:32.008073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:45.883 [2024-07-12 
09:38:32.008094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.883 [2024-07-12 09:38:32.008163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.883 [2024-07-12 09:38:32.008182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:45.883 [2024-07-12 09:38:32.008228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:32:45.883 [2024-07-12 09:38:32.008243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.883 [2024-07-12 09:38:32.008269] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:45.883 [2024-07-12 09:38:32.008291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.008308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.008324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.008339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.008353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.008368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.008383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.008397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.008412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.008427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.008442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.008457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.008471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.008486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.008501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.008516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.008530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.008545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009222] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 
[2024-07-12 09:38:32.009628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.009991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.010005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:45.883 [2024-07-12 09:38:32.010019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free 00:32:45.884 [2024-07-12 09:38:32.010033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:45.884 [2024-07-12 09:38:32.010504] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:45.884 [2024-07-12 09:38:32.010518] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b198a0e0-0f51-42a0-ac11-889a0fc09615 00:32:45.884 [2024-07-12 09:38:32.010532] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:45.884 [2024-07-12 09:38:32.010546] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:45.884 [2024-07-12 09:38:32.010566] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:45.884 [2024-07-12 09:38:32.010580] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:45.884 [2024-07-12 09:38:32.010593] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:45.884 [2024-07-12 09:38:32.010607] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:45.884 [2024-07-12 09:38:32.010624] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:45.884 [2024-07-12 09:38:32.010637] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:45.884 [2024-07-12 09:38:32.010649] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:45.884 [2024-07-12 09:38:32.010663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.884 [2024-07-12 09:38:32.010677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:45.884 [2024-07-12 09:38:32.010693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.395 ms 00:32:45.884 [2024-07-12 09:38:32.010719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.884 [2024-07-12 09:38:32.027271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.884 [2024-07-12 09:38:32.027344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:45.884 [2024-07-12 09:38:32.027363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.517 ms 00:32:45.884 [2024-07-12 09:38:32.027374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.884 [2024-07-12 09:38:32.027831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.884 [2024-07-12 09:38:32.027868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:45.884 [2024-07-12 09:38:32.027884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:32:45.884 [2024-07-12 09:38:32.027895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.884 [2024-07-12 09:38:32.061836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.884 [2024-07-12 09:38:32.061899] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:45.884 [2024-07-12 09:38:32.061931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.884 [2024-07-12 09:38:32.061942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.884 [2024-07-12 09:38:32.062005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.884 [2024-07-12 09:38:32.062020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:45.884 [2024-07-12 09:38:32.062030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.884 [2024-07-12 09:38:32.062040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.884 [2024-07-12 09:38:32.062113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.884 [2024-07-12 09:38:32.062138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:45.884 [2024-07-12 09:38:32.062165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.884 [2024-07-12 09:38:32.062191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.884 [2024-07-12 09:38:32.062211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.884 [2024-07-12 09:38:32.062224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:45.884 [2024-07-12 09:38:32.062255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.884 [2024-07-12 09:38:32.062266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.884 [2024-07-12 09:38:32.153505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.884 [2024-07-12 09:38:32.153598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:45.884 [2024-07-12 09:38:32.153632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.884 [2024-07-12 09:38:32.153660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.143 [2024-07-12 09:38:32.240107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:46.143 [2024-07-12 09:38:32.240218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:46.143 [2024-07-12 09:38:32.240253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:46.143 [2024-07-12 09:38:32.240268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.143 [2024-07-12 09:38:32.240353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:46.143 [2024-07-12 09:38:32.240372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:46.143 [2024-07-12 09:38:32.240394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:46.143 [2024-07-12 09:38:32.240405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.143 [2024-07-12 09:38:32.240452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:46.143 [2024-07-12 09:38:32.240468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:46.143 [2024-07-12 09:38:32.240480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:46.143 [2024-07-12 09:38:32.240491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.143 [2024-07-12 09:38:32.240598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:32:46.143 [2024-07-12 09:38:32.240618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:46.143 [2024-07-12 09:38:32.240637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:46.143 [2024-07-12 09:38:32.240648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.143 [2024-07-12 09:38:32.240689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:46.143 [2024-07-12 09:38:32.240706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:46.143 [2024-07-12 09:38:32.240719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:46.143 [2024-07-12 09:38:32.240736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.143 [2024-07-12 09:38:32.240780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:46.143 [2024-07-12 09:38:32.240795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:46.143 [2024-07-12 09:38:32.240807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:46.143 [2024-07-12 09:38:32.240822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.143 [2024-07-12 09:38:32.240871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:46.143 [2024-07-12 09:38:32.240888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:46.143 [2024-07-12 09:38:32.240900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:46.143 [2024-07-12 09:38:32.240911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.143 [2024-07-12 09:38:32.241048] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 239.675 ms, result 0 00:32:47.078 00:32:47.078 00:32:47.078 09:38:33 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:49.611 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:49.611 09:38:35 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:32:49.611 [2024-07-12 09:38:35.453860] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:32:49.611 [2024-07-12 09:38:35.454033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88132 ] 00:32:49.611 [2024-07-12 09:38:35.627355] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.611 [2024-07-12 09:38:35.847747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.870 [2024-07-12 09:38:36.176169] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:49.870 [2024-07-12 09:38:36.176283] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:50.130 [2024-07-12 09:38:36.337437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.130 [2024-07-12 09:38:36.337506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:50.130 [2024-07-12 09:38:36.337527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:50.130 [2024-07-12 09:38:36.337538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.130 [2024-07-12 09:38:36.337620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.130 [2024-07-12 09:38:36.337643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:50.130 [2024-07-12 09:38:36.337655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:32:50.130 [2024-07-12 09:38:36.337670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.130 [2024-07-12 09:38:36.337703] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:50.130 [2024-07-12 09:38:36.338707] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:50.130 [2024-07-12 09:38:36.338749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.130 [2024-07-12 09:38:36.338767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:50.130 [2024-07-12 09:38:36.338780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.054 ms 00:32:50.130 [2024-07-12 09:38:36.338791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.130 [2024-07-12 09:38:36.339268] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:50.130 [2024-07-12 09:38:36.339341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.130 [2024-07-12 09:38:36.339356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:50.130 [2024-07-12 09:38:36.339369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:32:50.130 [2024-07-12 09:38:36.339386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.130 [2024-07-12 09:38:36.339447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.130 [2024-07-12 09:38:36.339465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:50.130 [2024-07-12 09:38:36.339477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:32:50.130 [2024-07-12 09:38:36.339487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.130 [2024-07-12 09:38:36.339927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:50.130 [2024-07-12 09:38:36.339962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:50.130 [2024-07-12 09:38:36.339977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:32:50.130 [2024-07-12 09:38:36.339992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.130 [2024-07-12 09:38:36.340084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.130 [2024-07-12 09:38:36.340104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:50.130 [2024-07-12 09:38:36.340116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:32:50.130 [2024-07-12 09:38:36.340127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.130 [2024-07-12 09:38:36.340169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.130 [2024-07-12 09:38:36.340202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:50.130 [2024-07-12 09:38:36.340217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:32:50.130 [2024-07-12 09:38:36.340228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.130 [2024-07-12 09:38:36.340265] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:50.130 [2024-07-12 09:38:36.344795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.130 [2024-07-12 09:38:36.344835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:50.130 [2024-07-12 09:38:36.344855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.536 ms 00:32:50.130 [2024-07-12 09:38:36.344866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.130 [2024-07-12 09:38:36.344916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.130 [2024-07-12 09:38:36.344933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:50.130 [2024-07-12 09:38:36.344945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:32:50.130 [2024-07-12 09:38:36.344955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.130 [2024-07-12 09:38:36.345017] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:50.130 [2024-07-12 09:38:36.345051] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:50.130 [2024-07-12 09:38:36.345093] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:50.130 [2024-07-12 09:38:36.345118] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:32:50.130 [2024-07-12 09:38:36.345241] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:50.130 [2024-07-12 09:38:36.345266] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:50.130 [2024-07-12 09:38:36.345282] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:32:50.130 [2024-07-12 09:38:36.345297] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:50.130 [2024-07-12 09:38:36.345310] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:50.130 [2024-07-12 09:38:36.345327] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:50.130 [2024-07-12 09:38:36.345337] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:50.130 [2024-07-12 09:38:36.345348] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:50.130 [2024-07-12 09:38:36.345363] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:50.130 [2024-07-12 09:38:36.345374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.130 [2024-07-12 09:38:36.345385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:50.130 [2024-07-12 09:38:36.345397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:32:50.130 [2024-07-12 09:38:36.345407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.130 [2024-07-12 09:38:36.345503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.130 [2024-07-12 09:38:36.345519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:50.130 [2024-07-12 09:38:36.345531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:32:50.130 [2024-07-12 09:38:36.345541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.130 [2024-07-12 09:38:36.345649] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:50.130 [2024-07-12 09:38:36.345666] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:50.130 [2024-07-12 09:38:36.345679] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:50.130 [2024-07-12 09:38:36.345690] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:50.130 [2024-07-12 09:38:36.345701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:50.130 [2024-07-12 09:38:36.345711] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:50.130 [2024-07-12 09:38:36.345722] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:50.130 [2024-07-12 09:38:36.345732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:50.130 [2024-07-12 09:38:36.345742] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:50.130 [2024-07-12 09:38:36.345752] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:50.130 [2024-07-12 09:38:36.345762] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:50.130 [2024-07-12 09:38:36.345772] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:50.130 [2024-07-12 09:38:36.345782] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:50.130 [2024-07-12 09:38:36.345792] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:50.130 [2024-07-12 09:38:36.345802] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:50.130 [2024-07-12 09:38:36.345812] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:50.130 [2024-07-12 09:38:36.345822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:50.130 [2024-07-12 09:38:36.345833] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:50.130 [2024-07-12 09:38:36.345842] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:50.130 [2024-07-12 09:38:36.345852] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:50.130 [2024-07-12 09:38:36.345862] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:50.130 [2024-07-12 09:38:36.345872] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:50.130 [2024-07-12 09:38:36.345896] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:50.130 [2024-07-12 09:38:36.345907] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:50.130 [2024-07-12 09:38:36.345925] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:50.130 [2024-07-12 09:38:36.345935] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:50.130 [2024-07-12 09:38:36.345945] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:50.130 [2024-07-12 09:38:36.345955] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:50.130 [2024-07-12 09:38:36.345965] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:50.130 [2024-07-12 09:38:36.345975] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:50.130 [2024-07-12 09:38:36.345985] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:50.130 [2024-07-12 09:38:36.345995] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:50.130 [2024-07-12 09:38:36.346005] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:50.130 [2024-07-12 09:38:36.346014] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:50.130 [2024-07-12 09:38:36.346024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:50.130 [2024-07-12 09:38:36.346034] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:50.130 [2024-07-12 09:38:36.346044] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:50.131 [2024-07-12 09:38:36.346054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:50.131 [2024-07-12 09:38:36.346064] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:50.131 [2024-07-12 09:38:36.346073] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:50.131 [2024-07-12 09:38:36.346083] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:50.131 [2024-07-12 09:38:36.346093] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:50.131 [2024-07-12 09:38:36.346103] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:50.131 [2024-07-12 09:38:36.346112] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:50.131 [2024-07-12 09:38:36.346123] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:50.131 [2024-07-12 09:38:36.346133] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:50.131 [2024-07-12 09:38:36.346143] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:50.131 [2024-07-12 09:38:36.346154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:50.131 [2024-07-12 09:38:36.346165] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:50.131 [2024-07-12 09:38:36.346175] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:50.131 
[2024-07-12 09:38:36.346201] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:50.131 [2024-07-12 09:38:36.346214] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:50.131 [2024-07-12 09:38:36.346225] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:50.131 [2024-07-12 09:38:36.346236] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:50.131 [2024-07-12 09:38:36.346249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:50.131 [2024-07-12 09:38:36.346262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:50.131 [2024-07-12 09:38:36.346274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:50.131 [2024-07-12 09:38:36.346286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:50.131 [2024-07-12 09:38:36.346297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:50.131 [2024-07-12 09:38:36.346307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:50.131 [2024-07-12 09:38:36.346318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:50.131 [2024-07-12 09:38:36.346329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:50.131 [2024-07-12 09:38:36.346340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:50.131 [2024-07-12 09:38:36.346351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:50.131 [2024-07-12 09:38:36.346362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:50.131 [2024-07-12 09:38:36.346372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:50.131 [2024-07-12 09:38:36.346383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:50.131 [2024-07-12 09:38:36.346394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:50.131 [2024-07-12 09:38:36.346406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:50.131 [2024-07-12 09:38:36.346416] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:50.131 [2024-07-12 09:38:36.346434] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:50.131 [2024-07-12 09:38:36.346446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:50.131 [2024-07-12 09:38:36.346457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:50.131 [2024-07-12 09:38:36.346468] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:50.131 [2024-07-12 09:38:36.346479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:50.131 [2024-07-12 09:38:36.346491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.131 [2024-07-12 09:38:36.346502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:50.131 [2024-07-12 09:38:36.346514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.908 ms 00:32:50.131 [2024-07-12 09:38:36.346525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.131 [2024-07-12 09:38:36.388290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.131 [2024-07-12 09:38:36.388353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:50.131 [2024-07-12 09:38:36.388374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.704 ms 00:32:50.131 [2024-07-12 09:38:36.388386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.131 [2024-07-12 09:38:36.388514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.131 [2024-07-12 09:38:36.388532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:50.131 [2024-07-12 09:38:36.388545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:32:50.131 [2024-07-12 09:38:36.388556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.131 [2024-07-12 09:38:36.428065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.131 [2024-07-12 09:38:36.428128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:50.131 [2024-07-12 09:38:36.428148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.407 ms 00:32:50.131 [2024-07-12 09:38:36.428160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.131 [2024-07-12 09:38:36.428247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.131 [2024-07-12 09:38:36.428267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:50.131 [2024-07-12 09:38:36.428286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:50.131 [2024-07-12 09:38:36.428297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.131 [2024-07-12 09:38:36.428451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.131 [2024-07-12 09:38:36.428470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:50.131 [2024-07-12 09:38:36.428484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:32:50.131 [2024-07-12 09:38:36.428495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.131 [2024-07-12 09:38:36.428647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.131 [2024-07-12 09:38:36.428679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:50.131 [2024-07-12 09:38:36.428691] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:32:50.131 [2024-07-12 09:38:36.428706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.131 [2024-07-12 09:38:36.445090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.131 [2024-07-12 09:38:36.445176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:50.131 [2024-07-12 09:38:36.445212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.356 ms 00:32:50.131 [2024-07-12 09:38:36.445224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.131 [2024-07-12 09:38:36.445415] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:50.131 [2024-07-12 09:38:36.445440] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:50.131 [2024-07-12 09:38:36.445454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.131 [2024-07-12 09:38:36.445466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:50.131 [2024-07-12 09:38:36.445479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:32:50.131 [2024-07-12 09:38:36.445490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.131 [2024-07-12 09:38:36.459933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.131 [2024-07-12 09:38:36.459969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:50.131 [2024-07-12 09:38:36.459991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.411 ms 00:32:50.131 [2024-07-12 09:38:36.460002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.131 [2024-07-12 09:38:36.460129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.131 [2024-07-12 09:38:36.460145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:50.131 [2024-07-12 09:38:36.460157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:32:50.131 [2024-07-12 09:38:36.460168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.131 [2024-07-12 09:38:36.460248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.131 [2024-07-12 09:38:36.460268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:50.131 [2024-07-12 09:38:36.460288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:50.131 [2024-07-12 09:38:36.460299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.131 [2024-07-12 09:38:36.461029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.131 [2024-07-12 09:38:36.461057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:50.131 [2024-07-12 09:38:36.461071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:32:50.131 [2024-07-12 09:38:36.461082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.131 [2024-07-12 09:38:36.461109] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:50.131 [2024-07-12 09:38:36.461126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.131 [2024-07-12 09:38:36.461151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:32:50.131 [2024-07-12 09:38:36.461166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:32:50.131 [2024-07-12 09:38:36.461177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.131 [2024-07-12 09:38:36.475200] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:50.131 [2024-07-12 09:38:36.475492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.131 [2024-07-12 09:38:36.475514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:50.131 [2024-07-12 09:38:36.475530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.260 ms 00:32:50.131 [2024-07-12 09:38:36.475540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.131 [2024-07-12 09:38:36.477869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.131 [2024-07-12 09:38:36.477907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:50.131 [2024-07-12 09:38:36.477922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.276 ms 00:32:50.131 [2024-07-12 09:38:36.477939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.131 [2024-07-12 09:38:36.478062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.131 [2024-07-12 09:38:36.478083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:50.131 [2024-07-12 09:38:36.478096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:32:50.131 [2024-07-12 09:38:36.478106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.132 [2024-07-12 09:38:36.478145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.132 [2024-07-12 09:38:36.478160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:50.132 [2024-07-12 09:38:36.478172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:50.132 [2024-07-12 09:38:36.478198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.132 [2024-07-12 09:38:36.478249] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:50.132 [2024-07-12 09:38:36.478267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.132 [2024-07-12 09:38:36.478279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:50.132 [2024-07-12 09:38:36.478291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:32:50.132 [2024-07-12 09:38:36.478301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.390 [2024-07-12 09:38:36.512661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.390 [2024-07-12 09:38:36.512721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:50.390 [2024-07-12 09:38:36.512741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.330 ms 00:32:50.390 [2024-07-12 09:38:36.512761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.390 [2024-07-12 09:38:36.512855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.390 [2024-07-12 09:38:36.512876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:50.390 [2024-07-12 09:38:36.512889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.040 ms 00:32:50.390 [2024-07-12 09:38:36.512899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.390 [2024-07-12 09:38:36.514159] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 176.178 ms, result 0 00:33:30.754  Copying: 26/1024 [MB] (26 MBps) Copying: 52/1024 [MB] (26 MBps) Copying: 80/1024 [MB] (27 MBps) Copying: 106/1024 [MB] (26 MBps) Copying: 131/1024 [MB] (25 MBps) Copying: 158/1024 [MB] (27 MBps) Copying: 185/1024 [MB] (26 MBps) Copying: 215/1024 [MB] (29 MBps) Copying: 242/1024 [MB] (27 MBps) Copying: 269/1024 [MB] (26 MBps) Copying: 295/1024 [MB] (26 MBps) Copying: 321/1024 [MB] (25 MBps) Copying: 347/1024 [MB] (26 MBps) Copying: 373/1024 [MB] (25 MBps) Copying: 398/1024 [MB] (24 MBps) Copying: 425/1024 [MB] (27 MBps) Copying: 451/1024 [MB] (25 MBps) Copying: 476/1024 [MB] (25 MBps) Copying: 502/1024 [MB] (25 MBps) Copying: 527/1024 [MB] (25 MBps) Copying: 552/1024 [MB] (25 MBps) Copying: 577/1024 [MB] (24 MBps) Copying: 603/1024 [MB] (26 MBps) Copying: 630/1024 [MB] (26 MBps) Copying: 656/1024 [MB] (26 MBps) Copying: 683/1024 [MB] (27 MBps) Copying: 710/1024 [MB] (26 MBps) Copying: 734/1024 [MB] (24 MBps) Copying: 760/1024 [MB] (25 MBps) Copying: 785/1024 [MB] (25 MBps) Copying: 810/1024 [MB] (25 MBps) Copying: 837/1024 [MB] (26 MBps) Copying: 863/1024 [MB] (26 MBps) Copying: 889/1024 [MB] (25 MBps) Copying: 913/1024 [MB] (24 MBps) Copying: 939/1024 [MB] (25 MBps) Copying: 964/1024 [MB] (25 MBps) Copying: 991/1024 [MB] (26 MBps) Copying: 1018/1024 [MB] (27 MBps) Copying: 1048256/1048576 [kB] (4976 kBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-12 09:39:16.945696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:30.754 [2024-07-12 09:39:16.945800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:30.754 [2024-07-12 09:39:16.945830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:30.754 [2024-07-12 09:39:16.945843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:30.755 [2024-07-12 09:39:16.947391] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:30.755 [2024-07-12 09:39:16.953458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:30.755 [2024-07-12 09:39:16.953516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:30.755 [2024-07-12 09:39:16.953532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.004 ms 00:33:30.755 [2024-07-12 09:39:16.953542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:30.755 [2024-07-12 09:39:16.965410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:30.755 [2024-07-12 09:39:16.965519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:30.755 [2024-07-12 09:39:16.965565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.029 ms 00:33:30.755 [2024-07-12 09:39:16.965586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:30.755 [2024-07-12 09:39:16.965622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:30.755 [2024-07-12 09:39:16.965637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:30.755 [2024-07-12 09:39:16.965648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:30.755 
[2024-07-12 09:39:16.965659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:30.755 [2024-07-12 09:39:16.965717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:30.755 [2024-07-12 09:39:16.965731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:30.755 [2024-07-12 09:39:16.965742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:33:30.755 [2024-07-12 09:39:16.965752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:30.755 [2024-07-12 09:39:16.965791] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:30.755 [2024-07-12 09:39:16.965808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130816 / 261120 wr_cnt: 1 state: open 00:33:30.755 [2024-07-12 09:39:16.965822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.965834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.965845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.965857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.965868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.965879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.965890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.965901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.965913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.965924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.965935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.965946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.965957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.965980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.965994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 
09:39:16.966061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 
00:33:30.755 [2024-07-12 09:39:16.966383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 
wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:30.755 [2024-07-12 09:39:16.966790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:30.756 [2024-07-12 09:39:16.966801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:30.756 [2024-07-12 09:39:16.966812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:30.756 [2024-07-12 09:39:16.966823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:30.756 [2024-07-12 09:39:16.966834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:30.756 [2024-07-12 09:39:16.966846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:30.756 [2024-07-12 09:39:16.966857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:30.756 [2024-07-12 09:39:16.966868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:30.756 [2024-07-12 09:39:16.966879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:30.756 [2024-07-12 09:39:16.966890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:30.756 [2024-07-12 09:39:16.966901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:30.756 [2024-07-12 09:39:16.966916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:30.756 [2024-07-12 09:39:16.966928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:30.756 [2024-07-12 09:39:16.966939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:30.756 [2024-07-12 09:39:16.966950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:30.756 [2024-07-12 09:39:16.966961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:30.756 [2024-07-12 09:39:16.966972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:30.756 [2024-07-12 09:39:16.966982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:30.756 [2024-07-12 09:39:16.966994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:30.756 [2024-07-12 09:39:16.967005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:30.756 [2024-07-12 09:39:16.967022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:30.756 [2024-07-12 09:39:16.967042] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:30.756 [2024-07-12 09:39:16.967053] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b198a0e0-0f51-42a0-ac11-889a0fc09615 00:33:30.756 [2024-07-12 09:39:16.967065] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130816 00:33:30.756 [2024-07-12 09:39:16.967075] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130848 00:33:30.756 [2024-07-12 09:39:16.967085] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130816 00:33:30.756 [2024-07-12 09:39:16.967097] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:33:30.756 [2024-07-12 09:39:16.967107] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:30.756 [2024-07-12 09:39:16.967117] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:30.756 [2024-07-12 09:39:16.967128] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:30.756 [2024-07-12 09:39:16.967138] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:30.756 [2024-07-12 09:39:16.967148] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:30.756 [2024-07-12 09:39:16.967158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:30.756 [2024-07-12 09:39:16.967169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:30.756 [2024-07-12 09:39:16.967196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.368 ms 00:33:30.756 [2024-07-12 09:39:16.967210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:30.756 [2024-07-12 09:39:16.984688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:30.756 [2024-07-12 09:39:16.984744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:30.756 [2024-07-12 09:39:16.984776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.455 ms 00:33:30.756 [2024-07-12 09:39:16.984803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:30.756 [2024-07-12 09:39:16.985284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:30.756 [2024-07-12 09:39:16.985311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:30.756 [2024-07-12 09:39:16.985324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:33:30.756 [2024-07-12 09:39:16.985336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:30.756 [2024-07-12 09:39:17.024011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:30.756 [2024-07-12 09:39:17.024071] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:30.756 [2024-07-12 09:39:17.024088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:30.756 [2024-07-12 09:39:17.024099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:30.756 [2024-07-12 09:39:17.024180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:30.756 [2024-07-12 09:39:17.024209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:30.756 [2024-07-12 09:39:17.024231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:30.756 [2024-07-12 09:39:17.024242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:30.756 [2024-07-12 09:39:17.024348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:30.756 [2024-07-12 09:39:17.024366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:30.756 [2024-07-12 09:39:17.024377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:30.756 [2024-07-12 09:39:17.024403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:30.756 [2024-07-12 09:39:17.024445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:30.756 [2024-07-12 09:39:17.024459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:30.756 [2024-07-12 09:39:17.024470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:30.756 [2024-07-12 09:39:17.024480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.015 [2024-07-12 09:39:17.128860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.015 [2024-07-12 09:39:17.128927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:31.015 [2024-07-12 09:39:17.128946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.015 [2024-07-12 09:39:17.128957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.015 [2024-07-12 09:39:17.217972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.015 [2024-07-12 09:39:17.218035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:31.015 [2024-07-12 09:39:17.218054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.015 [2024-07-12 09:39:17.218066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.015 [2024-07-12 09:39:17.218146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.016 [2024-07-12 09:39:17.218163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:31.016 [2024-07-12 09:39:17.218175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.016 [2024-07-12 09:39:17.218200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.016 [2024-07-12 09:39:17.218260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.016 [2024-07-12 09:39:17.218276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:31.016 [2024-07-12 09:39:17.218295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.016 [2024-07-12 09:39:17.218305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.016 [2024-07-12 09:39:17.218413] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:33:31.016 [2024-07-12 09:39:17.218436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:31.016 [2024-07-12 09:39:17.218447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.016 [2024-07-12 09:39:17.218457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.016 [2024-07-12 09:39:17.218509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.016 [2024-07-12 09:39:17.218525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:31.016 [2024-07-12 09:39:17.218542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.016 [2024-07-12 09:39:17.218553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.016 [2024-07-12 09:39:17.218594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.016 [2024-07-12 09:39:17.218608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:31.016 [2024-07-12 09:39:17.218620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.016 [2024-07-12 09:39:17.218630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.016 [2024-07-12 09:39:17.218680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.016 [2024-07-12 09:39:17.218697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:31.016 [2024-07-12 09:39:17.218713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.016 [2024-07-12 09:39:17.218724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.016 [2024-07-12 09:39:17.218872] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 276.518 ms, result 0 00:33:32.393 00:33:32.393 00:33:32.393 09:39:18 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:33:32.393 [2024-07-12 09:39:18.745428] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:33:32.393 [2024-07-12 09:39:18.745627] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88538 ] 00:33:32.653 [2024-07-12 09:39:18.913409] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.931 [2024-07-12 09:39:19.097773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.203 [2024-07-12 09:39:19.403267] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:33.203 [2024-07-12 09:39:19.403368] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:33.463 [2024-07-12 09:39:19.562473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.463 [2024-07-12 09:39:19.562550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:33.463 [2024-07-12 09:39:19.562602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:33.463 [2024-07-12 09:39:19.562614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.463 [2024-07-12 09:39:19.562687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.463 [2024-07-12 09:39:19.562708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:33.463 [2024-07-12 09:39:19.562720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:33:33.463 [2024-07-12 09:39:19.562735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.463 [2024-07-12 09:39:19.562765] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:33.463 [2024-07-12 09:39:19.563741] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:33.463 [2024-07-12 09:39:19.563783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.463 [2024-07-12 09:39:19.563801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:33.463 [2024-07-12 09:39:19.563813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.024 ms 00:33:33.463 [2024-07-12 09:39:19.563824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.463 [2024-07-12 09:39:19.564257] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:33.463 [2024-07-12 09:39:19.564298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.463 [2024-07-12 09:39:19.564312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:33.463 [2024-07-12 09:39:19.564325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:33:33.463 [2024-07-12 09:39:19.564343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.463 [2024-07-12 09:39:19.564401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.463 [2024-07-12 09:39:19.564418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:33.463 [2024-07-12 09:39:19.564430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:33:33.463 [2024-07-12 09:39:19.564440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.463 [2024-07-12 09:39:19.564854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:33.463 [2024-07-12 09:39:19.564883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:33.463 [2024-07-12 09:39:19.564897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:33:33.463 [2024-07-12 09:39:19.564912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.463 [2024-07-12 09:39:19.564991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.463 [2024-07-12 09:39:19.565018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:33.463 [2024-07-12 09:39:19.565032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:33:33.463 [2024-07-12 09:39:19.565043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.463 [2024-07-12 09:39:19.565080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.463 [2024-07-12 09:39:19.565096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:33.463 [2024-07-12 09:39:19.565108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:33:33.463 [2024-07-12 09:39:19.565133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.463 [2024-07-12 09:39:19.565179] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:33.463 [2024-07-12 09:39:19.569699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.463 [2024-07-12 09:39:19.569750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:33.463 [2024-07-12 09:39:19.569785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.525 ms 00:33:33.463 [2024-07-12 09:39:19.569795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.463 [2024-07-12 09:39:19.569836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.463 [2024-07-12 09:39:19.569852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:33.463 [2024-07-12 09:39:19.569863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:33:33.463 [2024-07-12 09:39:19.569873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.463 [2024-07-12 09:39:19.569941] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:33.463 [2024-07-12 09:39:19.569972] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:33.463 [2024-07-12 09:39:19.570028] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:33.463 [2024-07-12 09:39:19.570051] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:33:33.463 [2024-07-12 09:39:19.570153] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:33.463 [2024-07-12 09:39:19.570168] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:33.463 [2024-07-12 09:39:19.570182] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:33:33.463 [2024-07-12 09:39:19.570197] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:33.463 [2024-07-12 09:39:19.570227] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:33.463 [2024-07-12 09:39:19.570239] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:33.463 [2024-07-12 09:39:19.570250] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:33.463 [2024-07-12 09:39:19.570260] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:33.463 [2024-07-12 09:39:19.570276] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:33.463 [2024-07-12 09:39:19.570288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.463 [2024-07-12 09:39:19.570299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:33.463 [2024-07-12 09:39:19.570311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:33:33.463 [2024-07-12 09:39:19.570321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.463 [2024-07-12 09:39:19.570416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.463 [2024-07-12 09:39:19.570432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:33.463 [2024-07-12 09:39:19.570443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:33:33.463 [2024-07-12 09:39:19.570454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.463 [2024-07-12 09:39:19.570559] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:33.463 [2024-07-12 09:39:19.570576] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:33.463 [2024-07-12 09:39:19.570588] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:33.463 [2024-07-12 09:39:19.570599] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:33.463 [2024-07-12 09:39:19.570609] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:33.463 [2024-07-12 09:39:19.570619] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:33.463 [2024-07-12 09:39:19.570630] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:33.463 [2024-07-12 09:39:19.570640] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:33.463 [2024-07-12 09:39:19.570650] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:33.463 [2024-07-12 09:39:19.570660] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:33.463 [2024-07-12 09:39:19.570669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:33.463 [2024-07-12 09:39:19.570679] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:33.463 [2024-07-12 09:39:19.570689] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:33.463 [2024-07-12 09:39:19.570699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:33.463 [2024-07-12 09:39:19.570709] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:33.463 [2024-07-12 09:39:19.570718] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:33.463 [2024-07-12 09:39:19.570730] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:33.463 [2024-07-12 09:39:19.570740] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:33.463 [2024-07-12 09:39:19.570750] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:33.463 [2024-07-12 09:39:19.570760] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:33.463 [2024-07-12 09:39:19.570770] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:33.463 [2024-07-12 09:39:19.570780] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:33.463 [2024-07-12 09:39:19.570802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:33.463 [2024-07-12 09:39:19.570813] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:33.463 [2024-07-12 09:39:19.570822] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:33.463 [2024-07-12 09:39:19.570832] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:33.463 [2024-07-12 09:39:19.570842] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:33.463 [2024-07-12 09:39:19.570852] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:33.463 [2024-07-12 09:39:19.570861] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:33.463 [2024-07-12 09:39:19.570871] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:33.463 [2024-07-12 09:39:19.570880] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:33.463 [2024-07-12 09:39:19.570890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:33.463 [2024-07-12 09:39:19.570900] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:33.463 [2024-07-12 09:39:19.570909] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:33.463 [2024-07-12 09:39:19.570919] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:33.463 [2024-07-12 09:39:19.570929] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:33.463 [2024-07-12 09:39:19.570938] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:33.463 [2024-07-12 09:39:19.570948] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:33.463 [2024-07-12 09:39:19.570958] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:33.463 [2024-07-12 09:39:19.570968] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:33.463 [2024-07-12 09:39:19.570977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:33.463 [2024-07-12 09:39:19.570988] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:33.463 [2024-07-12 09:39:19.570997] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:33.463 [2024-07-12 09:39:19.571006] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:33.463 [2024-07-12 09:39:19.571017] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:33.463 [2024-07-12 09:39:19.571027] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:33.463 [2024-07-12 09:39:19.571038] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:33.463 [2024-07-12 09:39:19.571048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:33.463 [2024-07-12 09:39:19.571060] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:33.463 [2024-07-12 09:39:19.571069] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:33.463 
[2024-07-12 09:39:19.571079] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:33.464 [2024-07-12 09:39:19.571089] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:33.464 [2024-07-12 09:39:19.571099] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:33.464 [2024-07-12 09:39:19.571110] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:33.464 [2024-07-12 09:39:19.571124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:33.464 [2024-07-12 09:39:19.571135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:33.464 [2024-07-12 09:39:19.571146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:33.464 [2024-07-12 09:39:19.571157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:33.464 [2024-07-12 09:39:19.571169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:33.464 [2024-07-12 09:39:19.571180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:33.464 [2024-07-12 09:39:19.571207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:33.464 [2024-07-12 09:39:19.571219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:33.464 [2024-07-12 09:39:19.571229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:33.464 [2024-07-12 09:39:19.571240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:33.464 [2024-07-12 09:39:19.571251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:33.464 [2024-07-12 09:39:19.571262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:33.464 [2024-07-12 09:39:19.571273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:33.464 [2024-07-12 09:39:19.571284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:33.464 [2024-07-12 09:39:19.571295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:33.464 [2024-07-12 09:39:19.571306] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:33.464 [2024-07-12 09:39:19.571323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:33.464 [2024-07-12 09:39:19.571335] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:33.464 [2024-07-12 09:39:19.571346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:33.464 [2024-07-12 09:39:19.571357] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:33.464 [2024-07-12 09:39:19.571368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:33.464 [2024-07-12 09:39:19.571380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.464 [2024-07-12 09:39:19.571390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:33.464 [2024-07-12 09:39:19.571402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.886 ms 00:33:33.464 [2024-07-12 09:39:19.571413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.464 [2024-07-12 09:39:19.610505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.464 [2024-07-12 09:39:19.610564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:33.464 [2024-07-12 09:39:19.610601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.036 ms 00:33:33.464 [2024-07-12 09:39:19.610612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.464 [2024-07-12 09:39:19.610726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.464 [2024-07-12 09:39:19.610742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:33.464 [2024-07-12 09:39:19.610754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:33:33.464 [2024-07-12 09:39:19.610763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.464 [2024-07-12 09:39:19.647901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.464 [2024-07-12 09:39:19.647965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:33.464 [2024-07-12 09:39:19.647984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.014 ms 00:33:33.464 [2024-07-12 09:39:19.647996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.464 [2024-07-12 09:39:19.648079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.464 [2024-07-12 09:39:19.648109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:33.464 [2024-07-12 09:39:19.648126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:33.464 [2024-07-12 09:39:19.648136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.464 [2024-07-12 09:39:19.648327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.464 [2024-07-12 09:39:19.648346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:33.464 [2024-07-12 09:39:19.648358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:33:33.464 [2024-07-12 09:39:19.648369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.464 [2024-07-12 09:39:19.648513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.464 [2024-07-12 09:39:19.648542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:33.464 [2024-07-12 09:39:19.648555] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:33:33.464 [2024-07-12 09:39:19.648570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.464 [2024-07-12 09:39:19.664285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.464 [2024-07-12 09:39:19.664346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:33.464 [2024-07-12 09:39:19.664367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.689 ms 00:33:33.464 [2024-07-12 09:39:19.664377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.464 [2024-07-12 09:39:19.664589] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:33:33.464 [2024-07-12 09:39:19.664613] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:33.464 [2024-07-12 09:39:19.664627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.464 [2024-07-12 09:39:19.664638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:33.464 [2024-07-12 09:39:19.664650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:33:33.464 [2024-07-12 09:39:19.664661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.464 [2024-07-12 09:39:19.678910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.464 [2024-07-12 09:39:19.678976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:33.464 [2024-07-12 09:39:19.678991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.218 ms 00:33:33.464 [2024-07-12 09:39:19.679001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.464 [2024-07-12 09:39:19.679123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.464 [2024-07-12 09:39:19.679138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:33.464 [2024-07-12 09:39:19.679150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:33:33.464 [2024-07-12 09:39:19.679160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.464 [2024-07-12 09:39:19.679229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.464 [2024-07-12 09:39:19.679248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:33.464 [2024-07-12 09:39:19.679266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:33:33.464 [2024-07-12 09:39:19.679276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.464 [2024-07-12 09:39:19.680025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.464 [2024-07-12 09:39:19.680068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:33.464 [2024-07-12 09:39:19.680082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:33:33.464 [2024-07-12 09:39:19.680093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.464 [2024-07-12 09:39:19.680115] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:33.464 [2024-07-12 09:39:19.680131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.464 [2024-07-12 09:39:19.680141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:33:33.464 [2024-07-12 09:39:19.680169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:33:33.464 [2024-07-12 09:39:19.680180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.464 [2024-07-12 09:39:19.693443] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:33.464 [2024-07-12 09:39:19.693698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.464 [2024-07-12 09:39:19.693724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:33.464 [2024-07-12 09:39:19.693739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.466 ms 00:33:33.464 [2024-07-12 09:39:19.693749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.464 [2024-07-12 09:39:19.696200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.464 [2024-07-12 09:39:19.696286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:33.464 [2024-07-12 09:39:19.696316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.415 ms 00:33:33.464 [2024-07-12 09:39:19.696331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.464 [2024-07-12 09:39:19.696413] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:33:33.464 [2024-07-12 09:39:19.696961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.464 [2024-07-12 09:39:19.696993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:33.464 [2024-07-12 09:39:19.697007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:33:33.464 [2024-07-12 09:39:19.697028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.464 [2024-07-12 09:39:19.697062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.464 [2024-07-12 09:39:19.697076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:33.464 [2024-07-12 09:39:19.697088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:33.464 [2024-07-12 09:39:19.697104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.464 [2024-07-12 09:39:19.697141] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:33.464 [2024-07-12 09:39:19.697157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.464 [2024-07-12 09:39:19.697168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:33.464 [2024-07-12 09:39:19.697179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:33:33.464 [2024-07-12 09:39:19.697208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.464 [2024-07-12 09:39:19.729155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.464 [2024-07-12 09:39:19.729211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:33.464 [2024-07-12 09:39:19.729267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.921 ms 00:33:33.464 [2024-07-12 09:39:19.729278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.464 [2024-07-12 09:39:19.729369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:33.464 [2024-07-12 09:39:19.729404] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:33.464 [2024-07-12 09:39:19.729416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:33:33.464 [2024-07-12 09:39:19.729426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:33.464 [2024-07-12 09:39:19.738870] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 173.402 ms, result 0 00:34:09.901  Copying: 28/1024 [MB] (28 MBps) Copying: 56/1024 [MB] (27 MBps) Copying: 84/1024 [MB] (27 MBps) Copying: 112/1024 [MB] (27 MBps) Copying: 140/1024 [MB] (28 MBps) Copying: 167/1024 [MB] (27 MBps) Copying: 193/1024 [MB] (25 MBps) Copying: 218/1024 [MB] (24 MBps) Copying: 246/1024 [MB] (28 MBps) Copying: 274/1024 [MB] (28 MBps) Copying: 303/1024 [MB] (28 MBps) Copying: 329/1024 [MB] (25 MBps) Copying: 356/1024 [MB] (27 MBps) Copying: 382/1024 [MB] (25 MBps) Copying: 415/1024 [MB] (33 MBps) Copying: 442/1024 [MB] (26 MBps) Copying: 471/1024 [MB] (29 MBps) Copying: 501/1024 [MB] (30 MBps) Copying: 532/1024 [MB] (31 MBps) Copying: 562/1024 [MB] (29 MBps) Copying: 591/1024 [MB] (29 MBps) Copying: 620/1024 [MB] (28 MBps) Copying: 651/1024 [MB] (30 MBps) Copying: 681/1024 [MB] (30 MBps) Copying: 712/1024 [MB] (31 MBps) Copying: 743/1024 [MB] (30 MBps) Copying: 770/1024 [MB] (27 MBps) Copying: 800/1024 [MB] (29 MBps) Copying: 831/1024 [MB] (31 MBps) Copying: 858/1024 [MB] (26 MBps) Copying: 884/1024 [MB] (26 MBps) Copying: 915/1024 [MB] (31 MBps) Copying: 944/1024 [MB] (28 MBps) Copying: 969/1024 [MB] (24 MBps) Copying: 995/1024 [MB] (26 MBps) Copying: 1024/1024 [MB] (average 28 MBps)[2024-07-12 09:39:55.992956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:09.901 [2024-07-12 09:39:55.993063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:09.901 [2024-07-12 09:39:55.993101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:09.901 [2024-07-12 09:39:55.993118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.901 [2024-07-12 09:39:55.993156] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:09.901 [2024-07-12 09:39:55.997556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:09.901 [2024-07-12 09:39:55.997626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:09.901 [2024-07-12 09:39:55.997649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.367 ms 00:34:09.901 [2024-07-12 09:39:55.997665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.901 [2024-07-12 09:39:55.997973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:09.901 [2024-07-12 09:39:55.998010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:09.901 [2024-07-12 09:39:55.998042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:34:09.901 [2024-07-12 09:39:55.998057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.901 [2024-07-12 09:39:55.998105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:09.901 [2024-07-12 09:39:55.998124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:09.901 [2024-07-12 09:39:55.998140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:09.901 [2024-07-12 
09:39:55.998155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.901 [2024-07-12 09:39:56.000844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:09.901 [2024-07-12 09:39:56.000874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:09.901 [2024-07-12 09:39:56.000892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:34:09.901 [2024-07-12 09:39:56.000908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.901 [2024-07-12 09:39:56.000938] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:09.901 [2024-07-12 09:39:56.000962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:34:09.901 [2024-07-12 09:39:56.000981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:09.901 [2024-07-12 09:39:56.000998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:09.901 [2024-07-12 09:39:56.001015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:09.901 [2024-07-12 09:39:56.001032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:09.901 [2024-07-12 09:39:56.001049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:09.901 [2024-07-12 09:39:56.001065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001316] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 
[2024-07-12 09:39:56.001731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.001996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free 00:34:09.902 [2024-07-12 09:39:56.002162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:09.902 [2024-07-12 09:39:56.002685] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:09.903 [2024-07-12 09:39:56.002701] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b198a0e0-0f51-42a0-ac11-889a0fc09615 00:34:09.903 [2024-07-12 09:39:56.002717] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:34:09.903 [2024-07-12 09:39:56.002732] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3104 00:34:09.903 [2024-07-12 09:39:56.002747] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3072 00:34:09.903 [2024-07-12 09:39:56.002774] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0104 00:34:09.903 [2024-07-12 09:39:56.002790] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:09.903 [2024-07-12 09:39:56.002806] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:09.903 [2024-07-12 09:39:56.002821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:09.903 [2024-07-12 09:39:56.002835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:09.903 [2024-07-12 09:39:56.002849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:09.903 [2024-07-12 09:39:56.002865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:09.903 [2024-07-12 09:39:56.002886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:09.903 [2024-07-12 09:39:56.002903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.928 ms 00:34:09.903 [2024-07-12 09:39:56.002918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.903 [2024-07-12 09:39:56.022569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:09.903 [2024-07-12 09:39:56.022655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:09.903 [2024-07-12 09:39:56.022677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.614 ms 00:34:09.903 [2024-07-12 09:39:56.022690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.903 [2024-07-12 09:39:56.023212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:09.903 [2024-07-12 09:39:56.023244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:09.903 [2024-07-12 09:39:56.023259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:34:09.903 [2024-07-12 09:39:56.023270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.903 [2024-07-12 09:39:56.060270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:09.903 [2024-07-12 09:39:56.060344] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:09.903 [2024-07-12 09:39:56.060363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:09.903 [2024-07-12 09:39:56.060384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.903 [2024-07-12 09:39:56.060467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:09.903 [2024-07-12 09:39:56.060482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:09.903 [2024-07-12 09:39:56.060493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:09.903 [2024-07-12 09:39:56.060504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.903 [2024-07-12 09:39:56.060584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:09.903 [2024-07-12 09:39:56.060604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:09.903 [2024-07-12 09:39:56.060616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:09.903 [2024-07-12 09:39:56.060627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.903 [2024-07-12 09:39:56.060654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:09.903 [2024-07-12 09:39:56.060667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:09.903 [2024-07-12 09:39:56.060678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:09.903 [2024-07-12 09:39:56.060689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.903 [2024-07-12 09:39:56.159592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:09.903 [2024-07-12 09:39:56.159680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:09.903 [2024-07-12 09:39:56.159702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:09.903 [2024-07-12 09:39:56.159729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.903 [2024-07-12 09:39:56.246035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:09.903 [2024-07-12 09:39:56.246124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:09.903 [2024-07-12 09:39:56.246144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:09.903 [2024-07-12 09:39:56.246155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.903 [2024-07-12 09:39:56.246270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:09.903 [2024-07-12 09:39:56.246292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:09.903 [2024-07-12 09:39:56.246305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:09.903 [2024-07-12 09:39:56.246315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.903 [2024-07-12 09:39:56.246358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:09.903 [2024-07-12 09:39:56.246385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:09.903 [2024-07-12 09:39:56.246397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:09.903 [2024-07-12 09:39:56.246407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.903 [2024-07-12 09:39:56.246509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:34:09.903 [2024-07-12 09:39:56.246538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:09.903 [2024-07-12 09:39:56.246552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:09.903 [2024-07-12 09:39:56.246563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.903 [2024-07-12 09:39:56.246602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:09.903 [2024-07-12 09:39:56.246619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:09.903 [2024-07-12 09:39:56.246636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:09.903 [2024-07-12 09:39:56.246646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.903 [2024-07-12 09:39:56.246688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:09.903 [2024-07-12 09:39:56.246702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:09.903 [2024-07-12 09:39:56.246713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:09.903 [2024-07-12 09:39:56.246724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.903 [2024-07-12 09:39:56.246773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:09.903 [2024-07-12 09:39:56.246804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:09.903 [2024-07-12 09:39:56.246817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:09.903 [2024-07-12 09:39:56.246828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:09.903 [2024-07-12 09:39:56.246967] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 253.984 ms, result 0 00:34:11.278 00:34:11.278 00:34:11.278 09:39:57 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:13.180 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:13.180 09:39:59 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:34:13.180 09:39:59 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:34:13.180 09:39:59 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:34:13.438 09:39:59 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:13.438 09:39:59 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:13.438 Process with pid 87045 is not found 00:34:13.438 Remove shared memory files 00:34:13.438 09:39:59 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 87045 00:34:13.438 09:39:59 ftl.ftl_restore_fast -- common/autotest_common.sh@948 -- # '[' -z 87045 ']' 00:34:13.438 09:39:59 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # kill -0 87045 00:34:13.438 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (87045) - No such process 00:34:13.438 09:39:59 ftl.ftl_restore_fast -- common/autotest_common.sh@975 -- # echo 'Process with pid 87045 is not found' 00:34:13.438 09:39:59 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:34:13.438 09:39:59 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:13.438 09:39:59 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 
00:34:13.438 09:39:59 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_b198a0e0-0f51-42a0-ac11-889a0fc09615_band_md /dev/hugepages/ftl_b198a0e0-0f51-42a0-ac11-889a0fc09615_l2p_l1 /dev/hugepages/ftl_b198a0e0-0f51-42a0-ac11-889a0fc09615_l2p_l2 /dev/hugepages/ftl_b198a0e0-0f51-42a0-ac11-889a0fc09615_l2p_l2_ctx /dev/hugepages/ftl_b198a0e0-0f51-42a0-ac11-889a0fc09615_nvc_md /dev/hugepages/ftl_b198a0e0-0f51-42a0-ac11-889a0fc09615_p2l_pool /dev/hugepages/ftl_b198a0e0-0f51-42a0-ac11-889a0fc09615_sb /dev/hugepages/ftl_b198a0e0-0f51-42a0-ac11-889a0fc09615_sb_shm /dev/hugepages/ftl_b198a0e0-0f51-42a0-ac11-889a0fc09615_trim_bitmap /dev/hugepages/ftl_b198a0e0-0f51-42a0-ac11-889a0fc09615_trim_log /dev/hugepages/ftl_b198a0e0-0f51-42a0-ac11-889a0fc09615_trim_md /dev/hugepages/ftl_b198a0e0-0f51-42a0-ac11-889a0fc09615_vmap 00:34:13.438 09:39:59 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:34:13.438 09:39:59 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:13.438 09:39:59 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:34:13.438 00:34:13.438 real 3m12.576s 00:34:13.438 user 2m59.628s 00:34:13.438 sys 0m14.961s 00:34:13.438 09:39:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:13.438 09:39:59 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:34:13.438 ************************************ 00:34:13.438 END TEST ftl_restore_fast 00:34:13.439 ************************************ 00:34:13.439 09:39:59 ftl -- common/autotest_common.sh@1142 -- # return 0 00:34:13.439 09:39:59 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:34:13.439 09:39:59 ftl -- ftl/ftl.sh@14 -- # killprocess 79327 00:34:13.439 09:39:59 ftl -- common/autotest_common.sh@948 -- # '[' -z 79327 ']' 00:34:13.439 09:39:59 ftl -- common/autotest_common.sh@952 -- # kill -0 79327 00:34:13.439 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (79327) - No such process 00:34:13.439 Process with pid 79327 is not found 00:34:13.439 09:39:59 ftl -- common/autotest_common.sh@975 -- # echo 'Process with pid 79327 is not found' 00:34:13.439 09:39:59 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:34:13.439 09:39:59 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=88956 00:34:13.439 09:39:59 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:13.439 09:39:59 ftl -- ftl/ftl.sh@20 -- # waitforlisten 88956 00:34:13.439 09:39:59 ftl -- common/autotest_common.sh@829 -- # '[' -z 88956 ']' 00:34:13.439 09:39:59 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:13.439 09:39:59 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:13.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:13.439 09:39:59 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:13.439 09:39:59 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:13.439 09:39:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:13.696 [2024-07-12 09:39:59.856986] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:34:13.696 [2024-07-12 09:39:59.857242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88956 ] 00:34:13.696 [2024-07-12 09:40:00.039833] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:13.953 [2024-07-12 09:40:00.277325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:14.899 09:40:01 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:14.899 09:40:01 ftl -- common/autotest_common.sh@862 -- # return 0 00:34:14.899 09:40:01 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:34:15.157 nvme0n1 00:34:15.157 09:40:01 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:34:15.157 09:40:01 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:15.157 09:40:01 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:34:15.723 09:40:01 ftl -- ftl/common.sh@28 -- # stores=bcd71ede-1d42-499f-beec-d374d122e021 00:34:15.723 09:40:01 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:34:15.723 09:40:01 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bcd71ede-1d42-499f-beec-d374d122e021 00:34:15.981 09:40:02 ftl -- ftl/ftl.sh@23 -- # killprocess 88956 00:34:15.981 09:40:02 ftl -- common/autotest_common.sh@948 -- # '[' -z 88956 ']' 00:34:15.981 09:40:02 ftl -- common/autotest_common.sh@952 -- # kill -0 88956 00:34:15.981 09:40:02 ftl -- common/autotest_common.sh@953 -- # uname 00:34:15.981 09:40:02 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:15.981 09:40:02 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88956 00:34:15.981 09:40:02 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:15.981 09:40:02 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:15.981 killing process with pid 88956 00:34:15.981 09:40:02 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88956' 00:34:15.981 09:40:02 ftl -- common/autotest_common.sh@967 -- # kill 88956 00:34:15.981 09:40:02 ftl -- common/autotest_common.sh@972 -- # wait 88956 00:34:17.881 09:40:04 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:18.140 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:18.140 Waiting for block devices as requested 00:34:18.399 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:34:18.399 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:18.399 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:34:18.657 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:34:23.925 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:34:23.925 09:40:09 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:34:23.925 Remove shared memory files 00:34:23.925 09:40:09 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:23.925 09:40:09 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:34:23.925 09:40:09 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:34:23.925 09:40:09 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:34:23.925 09:40:09 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:23.925 09:40:09 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:34:23.925 
************************************ 00:34:23.925 END TEST ftl 00:34:23.925 ************************************ 00:34:23.925 00:34:23.925 real 14m49.311s 00:34:23.925 user 17m29.697s 00:34:23.925 sys 1m43.192s 00:34:23.925 09:40:09 ftl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:23.925 09:40:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:23.925 09:40:09 -- common/autotest_common.sh@1142 -- # return 0 00:34:23.925 09:40:09 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:34:23.925 09:40:09 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:34:23.925 09:40:09 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:34:23.925 09:40:09 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:34:23.925 09:40:09 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:34:23.925 09:40:09 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:34:23.925 09:40:09 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:34:23.925 09:40:09 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:34:23.925 09:40:09 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:34:23.925 09:40:09 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:34:23.925 09:40:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:23.925 09:40:09 -- common/autotest_common.sh@10 -- # set +x 00:34:23.925 09:40:09 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:34:23.925 09:40:09 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:34:23.925 09:40:09 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:34:23.925 09:40:09 -- common/autotest_common.sh@10 -- # set +x 00:34:24.860 INFO: APP EXITING 00:34:24.860 INFO: killing all VMs 00:34:24.860 INFO: killing vhost app 00:34:24.860 INFO: EXIT DONE 00:34:25.118 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:25.702 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:34:25.702 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:34:25.702 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:34:25.702 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:34:25.961 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:26.528 Cleaning 00:34:26.528 Removing: /var/run/dpdk/spdk0/config 00:34:26.528 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:26.528 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:26.528 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:26.528 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:26.528 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:26.528 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:26.528 Removing: /var/run/dpdk/spdk0 00:34:26.528 Removing: /var/run/dpdk/spdk_pid62124 00:34:26.528 Removing: /var/run/dpdk/spdk_pid62335 00:34:26.528 Removing: /var/run/dpdk/spdk_pid62556 00:34:26.528 Removing: /var/run/dpdk/spdk_pid62660 00:34:26.528 Removing: /var/run/dpdk/spdk_pid62705 00:34:26.528 Removing: /var/run/dpdk/spdk_pid62835 00:34:26.528 Removing: /var/run/dpdk/spdk_pid62853 00:34:26.528 Removing: /var/run/dpdk/spdk_pid63040 00:34:26.528 Removing: /var/run/dpdk/spdk_pid63136 00:34:26.528 Removing: /var/run/dpdk/spdk_pid63225 00:34:26.528 Removing: /var/run/dpdk/spdk_pid63342 00:34:26.528 Removing: /var/run/dpdk/spdk_pid63437 00:34:26.528 Removing: /var/run/dpdk/spdk_pid63482 00:34:26.528 Removing: /var/run/dpdk/spdk_pid63518 00:34:26.528 Removing: /var/run/dpdk/spdk_pid63586 00:34:26.528 Removing: /var/run/dpdk/spdk_pid63687 00:34:26.528 Removing: 
/var/run/dpdk/spdk_pid64143 00:34:26.528 Removing: /var/run/dpdk/spdk_pid64218 00:34:26.528 Removing: /var/run/dpdk/spdk_pid64286 00:34:26.528 Removing: /var/run/dpdk/spdk_pid64308 00:34:26.528 Removing: /var/run/dpdk/spdk_pid64445 00:34:26.528 Removing: /var/run/dpdk/spdk_pid64461 00:34:26.528 Removing: /var/run/dpdk/spdk_pid64609 00:34:26.528 Removing: /var/run/dpdk/spdk_pid64625 00:34:26.528 Removing: /var/run/dpdk/spdk_pid64695 00:34:26.528 Removing: /var/run/dpdk/spdk_pid64713 00:34:26.528 Removing: /var/run/dpdk/spdk_pid64777 00:34:26.528 Removing: /var/run/dpdk/spdk_pid64795 00:34:26.528 Removing: /var/run/dpdk/spdk_pid64969 00:34:26.528 Removing: /var/run/dpdk/spdk_pid65011 00:34:26.528 Removing: /var/run/dpdk/spdk_pid65092 00:34:26.528 Removing: /var/run/dpdk/spdk_pid65162 00:34:26.528 Removing: /var/run/dpdk/spdk_pid65199 00:34:26.528 Removing: /var/run/dpdk/spdk_pid65277 00:34:26.528 Removing: /var/run/dpdk/spdk_pid65322 00:34:26.528 Removing: /var/run/dpdk/spdk_pid65364 00:34:26.528 Removing: /var/run/dpdk/spdk_pid65412 00:34:26.528 Removing: /var/run/dpdk/spdk_pid65458 00:34:26.528 Removing: /var/run/dpdk/spdk_pid65505 00:34:26.528 Removing: /var/run/dpdk/spdk_pid65551 00:34:26.528 Removing: /var/run/dpdk/spdk_pid65598 00:34:26.528 Removing: /var/run/dpdk/spdk_pid65639 00:34:26.528 Removing: /var/run/dpdk/spdk_pid65686 00:34:26.528 Removing: /var/run/dpdk/spdk_pid65732 00:34:26.528 Removing: /var/run/dpdk/spdk_pid65779 00:34:26.528 Removing: /var/run/dpdk/spdk_pid65825 00:34:26.528 Removing: /var/run/dpdk/spdk_pid65872 00:34:26.528 Removing: /var/run/dpdk/spdk_pid65913 00:34:26.528 Removing: /var/run/dpdk/spdk_pid65965 00:34:26.528 Removing: /var/run/dpdk/spdk_pid66006 00:34:26.528 Removing: /var/run/dpdk/spdk_pid66056 00:34:26.528 Removing: /var/run/dpdk/spdk_pid66105 00:34:26.528 Removing: /var/run/dpdk/spdk_pid66152 00:34:26.528 Removing: /var/run/dpdk/spdk_pid66199 00:34:26.528 Removing: /var/run/dpdk/spdk_pid66281 00:34:26.528 Removing: /var/run/dpdk/spdk_pid66392 00:34:26.528 Removing: /var/run/dpdk/spdk_pid66570 00:34:26.528 Removing: /var/run/dpdk/spdk_pid66660 00:34:26.528 Removing: /var/run/dpdk/spdk_pid66702 00:34:26.528 Removing: /var/run/dpdk/spdk_pid67170 00:34:26.528 Removing: /var/run/dpdk/spdk_pid67274 00:34:26.528 Removing: /var/run/dpdk/spdk_pid67394 00:34:26.528 Removing: /var/run/dpdk/spdk_pid67453 00:34:26.528 Removing: /var/run/dpdk/spdk_pid67484 00:34:26.528 Removing: /var/run/dpdk/spdk_pid67560 00:34:26.528 Removing: /var/run/dpdk/spdk_pid68192 00:34:26.528 Removing: /var/run/dpdk/spdk_pid68234 00:34:26.528 Removing: /var/run/dpdk/spdk_pid68742 00:34:26.528 Removing: /var/run/dpdk/spdk_pid68842 00:34:26.528 Removing: /var/run/dpdk/spdk_pid68963 00:34:26.528 Removing: /var/run/dpdk/spdk_pid69020 00:34:26.528 Removing: /var/run/dpdk/spdk_pid69047 00:34:26.528 Removing: /var/run/dpdk/spdk_pid69078 00:34:26.528 Removing: /var/run/dpdk/spdk_pid70934 00:34:26.787 Removing: /var/run/dpdk/spdk_pid71081 00:34:26.787 Removing: /var/run/dpdk/spdk_pid71085 00:34:26.787 Removing: /var/run/dpdk/spdk_pid71097 00:34:26.787 Removing: /var/run/dpdk/spdk_pid71142 00:34:26.787 Removing: /var/run/dpdk/spdk_pid71151 00:34:26.787 Removing: /var/run/dpdk/spdk_pid71163 00:34:26.787 Removing: /var/run/dpdk/spdk_pid71208 00:34:26.787 Removing: /var/run/dpdk/spdk_pid71212 00:34:26.787 Removing: /var/run/dpdk/spdk_pid71224 00:34:26.787 Removing: /var/run/dpdk/spdk_pid71269 00:34:26.787 Removing: /var/run/dpdk/spdk_pid71273 00:34:26.787 Removing: /var/run/dpdk/spdk_pid71285 
00:34:26.787 Removing: /var/run/dpdk/spdk_pid72630 00:34:26.787 Removing: /var/run/dpdk/spdk_pid72731 00:34:26.787 Removing: /var/run/dpdk/spdk_pid74130 00:34:26.787 Removing: /var/run/dpdk/spdk_pid75470 00:34:26.787 Removing: /var/run/dpdk/spdk_pid75596 00:34:26.787 Removing: /var/run/dpdk/spdk_pid75717 00:34:26.787 Removing: /var/run/dpdk/spdk_pid75843 00:34:26.787 Removing: /var/run/dpdk/spdk_pid75987 00:34:26.787 Removing: /var/run/dpdk/spdk_pid76064 00:34:26.787 Removing: /var/run/dpdk/spdk_pid76206 00:34:26.787 Removing: /var/run/dpdk/spdk_pid76579 00:34:26.787 Removing: /var/run/dpdk/spdk_pid76623 00:34:26.787 Removing: /var/run/dpdk/spdk_pid77095 00:34:26.787 Removing: /var/run/dpdk/spdk_pid77277 00:34:26.787 Removing: /var/run/dpdk/spdk_pid77381 00:34:26.787 Removing: /var/run/dpdk/spdk_pid77498 00:34:26.787 Removing: /var/run/dpdk/spdk_pid77556 00:34:26.787 Removing: /var/run/dpdk/spdk_pid77583 00:34:26.787 Removing: /var/run/dpdk/spdk_pid77885 00:34:26.787 Removing: /var/run/dpdk/spdk_pid77940 00:34:26.787 Removing: /var/run/dpdk/spdk_pid78018 00:34:26.787 Removing: /var/run/dpdk/spdk_pid78401 00:34:26.787 Removing: /var/run/dpdk/spdk_pid78544 00:34:26.787 Removing: /var/run/dpdk/spdk_pid79327 00:34:26.787 Removing: /var/run/dpdk/spdk_pid79468 00:34:26.787 Removing: /var/run/dpdk/spdk_pid79657 00:34:26.787 Removing: /var/run/dpdk/spdk_pid79757 00:34:26.787 Removing: /var/run/dpdk/spdk_pid80156 00:34:26.787 Removing: /var/run/dpdk/spdk_pid80422 00:34:26.787 Removing: /var/run/dpdk/spdk_pid80781 00:34:26.787 Removing: /var/run/dpdk/spdk_pid80976 00:34:26.787 Removing: /var/run/dpdk/spdk_pid81105 00:34:26.787 Removing: /var/run/dpdk/spdk_pid81171 00:34:26.787 Removing: /var/run/dpdk/spdk_pid81307 00:34:26.787 Removing: /var/run/dpdk/spdk_pid81344 00:34:26.787 Removing: /var/run/dpdk/spdk_pid81409 00:34:26.787 Removing: /var/run/dpdk/spdk_pid81605 00:34:26.787 Removing: /var/run/dpdk/spdk_pid81842 00:34:26.787 Removing: /var/run/dpdk/spdk_pid82239 00:34:26.787 Removing: /var/run/dpdk/spdk_pid82683 00:34:26.787 Removing: /var/run/dpdk/spdk_pid83125 00:34:26.787 Removing: /var/run/dpdk/spdk_pid83624 00:34:26.787 Removing: /var/run/dpdk/spdk_pid83772 00:34:26.787 Removing: /var/run/dpdk/spdk_pid83877 00:34:26.787 Removing: /var/run/dpdk/spdk_pid84526 00:34:26.787 Removing: /var/run/dpdk/spdk_pid84609 00:34:26.787 Removing: /var/run/dpdk/spdk_pid85071 00:34:26.787 Removing: /var/run/dpdk/spdk_pid85470 00:34:26.787 Removing: /var/run/dpdk/spdk_pid85976 00:34:26.787 Removing: /var/run/dpdk/spdk_pid86093 00:34:26.787 Removing: /var/run/dpdk/spdk_pid86139 00:34:26.787 Removing: /var/run/dpdk/spdk_pid86210 00:34:26.787 Removing: /var/run/dpdk/spdk_pid86272 00:34:26.787 Removing: /var/run/dpdk/spdk_pid86342 00:34:26.787 Removing: /var/run/dpdk/spdk_pid86549 00:34:26.787 Removing: /var/run/dpdk/spdk_pid86609 00:34:26.787 Removing: /var/run/dpdk/spdk_pid86682 00:34:26.787 Removing: /var/run/dpdk/spdk_pid86768 00:34:26.787 Removing: /var/run/dpdk/spdk_pid86803 00:34:26.787 Removing: /var/run/dpdk/spdk_pid86870 00:34:26.787 Removing: /var/run/dpdk/spdk_pid87045 00:34:26.787 Removing: /var/run/dpdk/spdk_pid87263 00:34:26.787 Removing: /var/run/dpdk/spdk_pid87688 00:34:26.787 Removing: /var/run/dpdk/spdk_pid88132 00:34:26.787 Removing: /var/run/dpdk/spdk_pid88538 00:34:26.787 Removing: /var/run/dpdk/spdk_pid88956 00:34:26.787 Clean 00:34:27.046 09:40:13 -- common/autotest_common.sh@1451 -- # return 0 00:34:27.046 09:40:13 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:34:27.046 09:40:13 
00:34:27.046 09:40:13 -- common/autotest_common.sh@728 -- # xtrace_disable
00:34:27.046 09:40:13 -- common/autotest_common.sh@10 -- # set +x
00:34:27.046 09:40:13 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:34:27.046 09:40:13 -- common/autotest_common.sh@728 -- # xtrace_disable
00:34:27.046 09:40:13 -- common/autotest_common.sh@10 -- # set +x
00:34:27.046 09:40:13 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:27.046 09:40:13 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:34:27.046 09:40:13 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:34:27.046 09:40:13 -- spdk/autotest.sh@391 -- # hash lcov
00:34:27.046 09:40:13 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:34:27.046 09:40:13 -- spdk/autotest.sh@393 -- # hostname
00:34:27.046 09:40:13 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:34:27.303 geninfo: WARNING: invalid characters removed from testname!
00:34:59.368 09:40:41 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:59.368 09:40:45 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:02.646 09:40:48 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:05.923 09:40:51 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:08.451 09:40:54 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:11.740 09:40:57 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:14.317 09:41:00 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
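The autotest.sh@393-400 trace above is the usual lcov capture, merge, and filter flow: capture the counters accumulated during the test run, add them to the pre-test baseline, then strip third-party and helper sources from the combined tracefile. A minimal sketch of the same flow follows; OUT, SPDK_DIR and LCOV_OPTS are illustrative shorthand rather than names used by the script, and LCOV_OPTS keeps only the branch/function --rc switches from the commands above.

  OUT=/home/vagrant/spdk_repo/spdk/../output
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
  # capture counters accumulated during the test run, tagged with the VM image name
  lcov $LCOV_OPTS -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"
  # merge the pre-test baseline with the post-test capture
  lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
  # drop DPDK, system headers and helper apps from the combined report
  for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
  done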
00:35:14.317 09:41:00 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:35:14.317 09:41:00 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:35:14.317 09:41:00 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:14.317 09:41:00 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:35:14.318 09:41:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:14.318 09:41:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:14.318 09:41:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:14.318 09:41:00 -- paths/export.sh@5 -- $ export PATH
00:35:14.318 09:41:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:14.318 09:41:00 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:35:14.318 09:41:00 -- common/autobuild_common.sh@444 -- $ date +%s
00:35:14.318 09:41:00 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720777260.XXXXXX
00:35:14.318 09:41:00 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720777260.2O7emL
00:35:14.318 09:41:00 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:35:14.318 09:41:00 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:35:14.318 09:41:00 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:35:14.318 09:41:00 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:35:14.318 09:41:00 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:35:14.318 09:41:00 -- common/autobuild_common.sh@460 -- $ get_config_params
00:35:14.318 09:41:00 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:35:14.318 09:41:00 -- common/autotest_common.sh@10 -- $ set +x
00:35:14.318 09:41:00 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:35:14.318 09:41:00 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:35:14.318 09:41:00 -- pm/common@17 -- $ local monitor
00:35:14.318 09:41:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:14.318 09:41:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:14.318 09:41:00 -- pm/common@25 -- $ sleep 1
00:35:14.318 09:41:00 -- pm/common@21 -- $ date +%s
00:35:14.318 09:41:00 -- pm/common@21 -- $ date +%s
00:35:14.318 09:41:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720777260
00:35:14.318 09:41:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720777260
00:35:14.318 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720777260_collect-vmstat.pm.log
00:35:14.318 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720777260_collect-cpu-load.pm.log
00:35:15.253 09:41:01 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:35:15.253 09:41:01 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:35:15.253 09:41:01 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:35:15.253 09:41:01 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:35:15.253 09:41:01 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:35:15.253 09:41:01 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:35:15.253 09:41:01 -- spdk/autopackage.sh@19 -- $ timing_finish
00:35:15.253 09:41:01 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:35:15.253 09:41:01 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:35:15.253 09:41:01 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:35:15.253 09:41:01 -- spdk/autopackage.sh@20 -- $ exit 0
00:35:15.253 09:41:01 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:35:15.253 09:41:01 -- pm/common@29 -- $ signal_monitor_resources TERM
00:35:15.253 09:41:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:35:15.253 09:41:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:15.253 09:41:01 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:35:15.253 09:41:01 -- pm/common@44 -- $ pid=90667
00:35:15.253 09:41:01 -- pm/common@50 -- $ kill -TERM 90667
00:35:15.253 09:41:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:15.253 09:41:01 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:35:15.253 09:41:01 -- pm/common@44 -- $ pid=90668
00:35:15.253 09:41:01 -- pm/common@50 -- $ kill -TERM 90668
00:35:15.253 + [[ -n 5204 ]]
00:35:15.253 + sudo kill 5204
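The pm/common@29-50 trace above is the resource-monitor cleanup run from the EXIT trap installed at autobuild_common.sh@463: each collector started earlier leaves a pidfile under the power/ output directory, and signal_monitor_resources walks those pidfiles and sends SIGTERM. A minimal sketch of that pattern follows; POWER_DIR and the monitor names are taken from the log, but the loop itself is illustrative rather than the script's actual code.

  POWER_DIR=/home/vagrant/spdk_repo/spdk/../output/power
  for monitor in collect-cpu-load collect-vmstat; do
      pidfile="$POWER_DIR/$monitor.pid"
      # monitors that never started (or already exited) leave no pidfile behind
      [[ -e "$pidfile" ]] || continue
      pid=$(<"$pidfile")
      # SIGTERM lets the collector flush its .pm.log before exiting
      kill -TERM "$pid" || true
  done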
00:35:15.262 [Pipeline] }
00:35:15.280 [Pipeline] // timeout
00:35:15.283 [Pipeline] }
00:35:15.298 [Pipeline] // stage
00:35:15.302 [Pipeline] }
00:35:15.316 [Pipeline] // catchError
00:35:15.325 [Pipeline] stage
00:35:15.326 [Pipeline] { (Stop VM)
00:35:15.337 [Pipeline] sh
00:35:15.612 + vagrant halt
00:35:19.806 ==> default: Halting domain...
00:35:25.089 [Pipeline] sh
00:35:25.423 + vagrant destroy -f
00:35:29.618 ==> default: Removing domain...
00:35:29.652 [Pipeline] sh
00:35:29.930 + mv output /var/jenkins/workspace/nvme-vg-autotest_3/output
00:35:29.939 [Pipeline] }
00:35:29.956 [Pipeline] // stage
00:35:29.962 [Pipeline] }
00:35:29.979 [Pipeline] // dir
00:35:29.985 [Pipeline] }
00:35:30.001 [Pipeline] // wrap
00:35:30.008 [Pipeline] }
00:35:30.022 [Pipeline] // catchError
00:35:30.031 [Pipeline] stage
00:35:30.033 [Pipeline] { (Epilogue)
00:35:30.047 [Pipeline] sh
00:35:30.323 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:35:36.914 [Pipeline] catchError
00:35:36.915 [Pipeline] {
00:35:36.927 [Pipeline] sh
00:35:37.203 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:35:37.459 Artifacts sizes are good
00:35:37.467 [Pipeline] }
00:35:37.484 [Pipeline] // catchError
00:35:37.495 [Pipeline] archiveArtifacts
00:35:37.501 Archiving artifacts
00:35:37.669 [Pipeline] cleanWs
00:35:37.686 [WS-CLEANUP] Deleting project workspace...
00:35:37.686 [WS-CLEANUP] Deferred wipeout is used...
00:35:37.723 [WS-CLEANUP] done
00:35:37.725 [Pipeline] }
00:35:37.741 [Pipeline] // stage
00:35:37.744 [Pipeline] }
00:35:37.761 [Pipeline] // node
00:35:37.765 [Pipeline] End of Pipeline
00:35:37.808 Finished: SUCCESS
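The Stop VM and Epilogue stages above reduce to a short teardown sequence: stop and delete the test VM, hand the results directory to the Jenkins workspace, then compress, size-check and archive the artifacts before the workspace is wiped. A minimal sketch of the equivalent manual steps, run from the Vagrant project directory, follows; the WORKSPACE value is copied from the log and everything else mirrors the commands above.

  WORKSPACE=/var/jenkins/workspace/nvme-vg-autotest_3
  # stop and delete the test VM
  vagrant halt
  vagrant destroy -f
  # hand the collected results to the Jenkins job before the VM directory goes away
  mv output "$WORKSPACE/output"
  # shrink and sanity-check artifacts prior to archiving
  jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
  jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh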